The raw-experience dogma: Dissolving the “qualia” problem
post by metaphysicist · 2012-09-16T19:15:13.794Z · LW · GW · Legacy · 341 comments
1. Defining the problem: The inverted spectrum
A. Attempted solutions to the inverted spectrum.
B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”
2. The false intuition of direct awareness
A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.
B. Experience can’t reveal the error in the intuition that raw experience exists.
C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.
D. We believe raw experience exists without detecting it.
3. The conceptual economy of qualia nihilism pays off in philosophical progress
4. Relying on the brute force of an intuition is rationally specious.
Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.
Comments sorted by top scores.
comment by Kaj_Sotala · 2012-09-14T10:32:34.331Z · LW(p) · GW(p)
Umm. Am I misunderstanding something, or is this post saying that we should "solve" the problem of qualia by accepting that we're all p-zombies?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T03:41:22.135Z · LW(p) · GW(p)
From the standpoint of somebody feeling confused about qualia, the trouble with this solution is not that it is necessarily false but that accepting it doesn't make you feel any less confused.
↑ comment by MrMind · 2012-09-29T17:18:00.208Z · LW(p) · GW(p)
I think that's because qualia, if they exist, have no correlation with the physical world, so we cannot convey information about them by physical means. P-zombies would talk about qualia in the same terms we do, which is actually the point of the thought experiment. The only way to 'solve' the qualia problem by physical means is therefore to accept their nonexistence and try to understand why we think there's a problem in the first place: every other coherent solution must produce the same physical effects as this one.
↑ comment by metaphysicist · 2012-09-24T04:05:31.618Z · LW(p) · GW(p)
me: A "p-zombie" "behaves" the same way we do, but does a p-zombie believe it has qualitative awareness?
To be precise about the value of the belief/intuition concept in accounting for the illusion that qualia exist—one defect in the zombie thought experiment is that it prompts the attitude: maybe I can't prove that you're not a zombie, but I sure as hell know I'm not one!
The zombie experiment imposes a consistent outside view; it seems to deny the evidence of "personal experience" by fiat—because it simply doesn't address what it would feel like to be a zombie.
So, the zombie experiment seems to show that people might not be able to distinguish zombies from humans; but invoking the beliefs held by the "zombie" shows from the inside that being a zombie can be no different from being a human: the two are subjectively indistinguishable.
To address your question directly: the ordinary zombie thought experiments purport to show that without qualia humans would be zombies; whereas when you allow zombies' (false) beliefs (in ineffable perceptual essences), the thought experiment shows that zombies are really humans.
↑ comment by metaphysicist · 2012-09-23T19:40:02.013Z · LW(p) · GW(p)
You may be omitting or misunderstanding the role of the concept of belief in my account. The role that concept plays here is original (and novel, to the best of my less-than-comprehensive knowledge).
A "p-zombie" "behaves" the same way we do, but does a p-zombie believe it has qualitative awareness? If it does, then there's no distinction between humans and p-zombies, but the antimaterialists who came up with the p-zombie thought experiment were of the persuasion that belief is as meaningless a concept for materialists as is qualia; both were then derogated by the reigning behaviorists as "mentalistic" concepts, hence illicit. The Churchlands are eliminativist about all "folk psychological" concepts like belief; Dennett doesn't apply the concept of belief to the problem of qualia. But qualia proponents make belief dependent on qualitative awareness: eliminating qualia does preclude deriving knowledge (a kind of belief) from conscious sensation.
On my account, what dissolves the problem of qualia is recognizing that the only "evidence" favoring their existence is our sense of certainty favoring our sequestered belief that they exist. (See 3.C. in OP.)
↑ comment by TheAncientGeek · 2014-07-08T14:24:37.394Z · LW(p) · GW(p)
But I am a fallibilist about my qualia.....
↑ comment by Pentashagon · 2012-09-18T18:35:49.735Z · LW(p) · GW(p)
I think the argument actually implies that p-zombies don't exist and therefore anything acting human is going to feel human from the inside. There isn't something special called "raw-experience" that we happen to have but that a p-zombie could not have.
We experience things in our mind, but reductionism implies that this experience has direct physical causes and effects and is therefore understandable and explainable by rational science. The experience of "red" has a specific physical description for each individual and, while it may be possible that two people disagree about whether a particular thing is "red", they could in principle study their brains until they found the precise points where their experiences/definitions diverged.
In practice, however, there is still a very strong sense in which private language exists. We do not yet have the ability to reduce our internal experience to physical cause and effect, and so we have no way to truly understand how other people feel and experience. For instance, I could not adequately describe "red" to a blind person, and a person who can see into the ultraviolet and infrared spectrum could not explain the colors "ultraviolet" or "infrared" to me. We lack a shared sensory framework, and we further lack a shared mental model of ourselves that can understand what experience is like and therefore think accurately about what someone else actually experiences. For standard humans in the past, it's arguable that private language actually existed. In the 21st century we have a chance to see private language dictionaries in our lifetimes.
↑ comment by TheAncientGeek · 2014-07-08T14:06:49.535Z · LW(p) · GW(p)
There's a difference between causation and reduction. The idea that qualia have physical causes is compatible with dualism, the view that they are not reducible to physics.
Knowing what causes non-standard qualia, or where they diverge, still doesn't tell you how non-standard qualia feel to the person having them.
For that reason, we are not going to have private language dictionaries any time soon. Looking at brain scans of someone with non-standard qualia is not going to tell me what their qualia are as qualia.
↑ comment by Pentashagon · 2014-07-09T02:41:50.329Z · LW(p) · GW(p)
Granted; we won't have definitive evidence for or against dualism until we're at the point that we can fully test the (non-) reductive nature of qualia. If people who have access to each other's private language dictionaries still report meta-feeling that the other person feels different qualia from the same mental stimulus then I'll have more evidence for dualism than I do now. True, that won't help with incomparable qualia, but it would be kind of...convenient...if the only incomparable qualia are the ones that people report feeling differently.
↑ comment by TheAncientGeek · 2014-07-09T12:36:34.036Z · LW(p) · GW(p)
We are not going to have private language dictionaries.
↑ comment by hankx7787 · 2012-09-14T21:51:08.825Z · LW(p) · GW(p)
Yeah, exactly. It sounds like he's denying experience exists or saying that it's illusory, which would be stupid. Experience is an epistemological first principle; it's axiomatic. The "solution" isn't to try to deny experience is real; the solution is to explain it (reduce it, ahem) as a physical process. I would agree that once you reduce it to a physical explanation there's nothing left over to explain, if that's ultimately the point he was trying to make (although it doesn't sound like it).
↑ comment by [deleted] · 2012-09-22T14:37:37.121Z · LW(p) · GW(p)
I would agree that once you reduce it to a physical explanation there's nothing left over to explain, if that's ultimately the point he was trying to make
I am fairly confident that that is, in fact, the point he was trying to make. Your reaction is familiar to me. I could be wrong, but the reason that this argument sounds absurd to you may be that you already agree with it, but haven't noticed that you agree with it because you don't seriously entertain any of the alternatives. I'm not criticizing you with this suggestion, let me explain.
The dualist position holds that experience is a non-physical thing that cannot be explained physically: the sensation of burning your hand on a stove-top is not identical to the behaviour of a heat-damaged nervous system, rather, there is a thing that is "the quale of pain", which you happen to be exposed to whenever your physical body is hurt. This is where the "inverted spectrum" argument comes in - how do you know that the colour-quale that I sample when I look at the sky is the same as your "blue"? Sure, I call it blue, but of course I would call it that because I learned what blue meant from being told "blue is the colour of a cloudless daytime sky".
Metaphysicist is making the case that qualia don't exist, and that instead every experience/sensation is reducible to physics. "Experiencing blue" is just what your brain does when it is exposed to a certain wavelength of light. But if you're already operating under that assumption and haven't considered and discarded the possibility of qualia, it can look as though the anti-qualia argument is an anti-experience argument, because it is clearly an argument that some putative element of experience does not exist. If your metaphysics only contains one putative element of experience, that's confusing.
My reaction to the anti-qualia argument was initially the same as yours. Then someone explained what I'd missed, and my second reaction was "seriously, we need an argument for that?" Then I read Wittgenstein, and for a long time it seemed like he was just flailing ineffectually at the problem. The tipping point was when I came to realise that the monist/dualist dichotomy isn't just an opinion that people hold, it's one of the hinges of their opinion-having machinery. Convincing them isn't simply a matter of giving the machine the right arguments as inputs - that would be like trying to use a pipe organ as a calculator. Instead, you have to hack their brain - present a series of inputs that changes how they have opinions, rather than just changing what opinions they have. That's what I think Wittgenstein was trying to do, and I think it has worked on me. Unfortunately, for me and most others, Wittgenstein isn't a fast-acting treatment.
Anyway, I've gone a bit off topic here. It's possible that I'm completely wrong about how you've interpreted Metaphysicist's post, if so, sorry for wasting your time. Only trying to help!
↑ comment by Mitchell_Porter · 2012-09-22T16:58:09.452Z · LW(p) · GW(p)
"Experiencing blue" is just what your brain does when it is exposed to a certain wavelength of light.
Sophistry. It's madness to say that the blue isn't actually there. But this is tempting for people who like the science we have, because the blue isn't there in that model of reality.
What we need is a model of reality in which experiences are what they are, and in which they play the causal role they appear to play. If our current physical ontology has no room for the existence of an actually blue experience in the brain, so much the worse for our current physical ontology. But modern physics is mathematical and operational, there is plenty of opportunity for something to actually be a conscious experience, while appearing in the formal theory as a state or entity with certain abstractly characterized structural and algebraic properties.
↑ comment by [deleted] · 2012-09-22T17:50:25.227Z · LW(p) · GW(p)
Sophistry. It's madness to say that the blue isn't actually there. But this is tempting for people who like the science we have, because the blue isn't there in that model of reality.
Ah, no. See, I am absolutely not saying that the blue isn't there. I agree that would be madness - I've experienced blue a million times. What I'm saying is this:
During the times when your brain is in the "blue state" you also happen to be experiencing the sensation of blueness. Same goes for the sensation of pain and the brain state associated with pain. In fact, this partnership between brain-state and perception is so reliable that we're getting close to being able to record people's thoughts in video format by scanning their heads. (http://www.youtube.com/watch?v=nsjDnYxJ0bo&feature=player_embedded)
The question is, if our model allows us to predict people's sensory experiences perfectly well on the basis of purely physical phenomena, why do we need to posit qualia? Seems to me that the simplest theory that describes all the data is that causal relationships between physical things are the only things that exist in this universe. If you throw out the premises that sensations aren't physical things and that physical things aren't sensations, then it suddenly seems like the most natural conclusion in the world, and I've never seen any evidence that prompted me to hold onto either of those premises.
Here's a query - what did it feel like the last time you didn't have a brain state? Obviously that's a stupid question, it's impossible for you to have a brain without having it be in one state or another, and you don't have any memories from before you had a brain. Similarly, by definition you can't remember what it was like the last time you were experiencing absolutely nothing (if there was something to remember then you would have been experiencing something). So what piece of evidence was it that prompted you to hypothesise the existence of qualia?
↑ comment by Mitchell_Porter · 2012-09-22T22:30:44.267Z · LW(p) · GW(p)
"Qualia" is just a new word for what used to be meant by the word "sensations", before "sensation" was redefined to mean "a type of brain process". The idea that sensory qualities like color are in the sensations, and hence in us, has been around for hundreds of years - thousands, if you count Democritus.
The problem with the modern redefinition of "sensation" as "brain process" is now that color is nowhere at all, inside or outside the brain. Or, more precisely, it substitutes a particular theory of what a sensation is (brain event) for the thing itself (experience of a sensory quality) in a way which allows the latter to be ignored or even denied.
On this issue most materialists are dualists - property dualists - without even noticing it. The problem is very simple. Physics, and hence natural science, is based on a model of the world in which all that exists are fundamental entities (particles, wavefunctions, etc) which do not possess the "secondary sensory qualities" like color, either individually or in combination. There is a disjunction between the properties posited by physics and the properties known in experience.
There are three known ways of dealing with this while still believing in physics. You say that both properties exist - property dualism. You say that only physical properties exist - total "eliminativism". Or you say that the experiential properties are directly playing a role in physics - which is best known via panpsychism, but one might doubt the necessity to regard everything ("pan") as "mental" ("psych").
Most materialists are property dualists because they say that only atoms exist, but then they think of their experience as how it feels to be a particular arrangement of atoms, when there is no such property in physics. It's an extra thing being tacked onto the physics. And the realization that this is dualism is somehow pushed away by the use of a locution like "experiencing blue" - e.g. "my current state includes the property that I am experiencing blue" - which buries the fact that the sensation itself has the property of being blue.
It's the fact that something is blue, which is why "qualia" have to be "posited".
↑ comment by [deleted] · 2012-09-23T04:24:43.756Z · LW(p) · GW(p)
It's only really your second paragraph that I disagree with. I'm a panpsychist, but I don't often mention it because a lot of people take that to mean "I believe that everything in the universe has a mind, including rocks and stars".
I go even further than you, though. I think that even of the materialists who aren't accidental/secret property dualists, most of them are still dualists without realising it. The idea that there are physical objects which are related to one another causally is inherently dualist because it theorises two types of things in the universe - physical objects and causal relations. More importantly, the idea of physical objects as distinct from causal relationships is dodgy, because it opens us up to Humean skepticism: we never see the objects themselves, just detect them by their causal relationships to us, so how do we know what they're actually like? All of the properties we associate with physical objects are products of their causal relationships with other matter, so separating the universe into physical things and causal relations paints us into the corner of believing in things which have no properties at all - a propertyless substrate a la the Scholastics.
The only hard and fast way to have a dualism-proof materialism that I'm comfortable with is to hold that objects are just clumps of causal relations. An electron isn't a tiny little ball of substrate to which the properties of mass and charge and spin adhere, rather it's just a likelihood that other particles in a given region will be affected by mass and charge and spin in an electron-like way. And that's how I can be a panpsychist: all causal relations are equal. The only thing different about the ones in our heads is that they're intricately interrelated in such a way that they're self-referential, sensitively dependent on outside conditions, and persistent in a way that means that present interactions can recall interactions that happened years in the past (memory). The sensation of being alive is just what it feels like to be a really complex web of causal relations, and when this web reacts slightly to outside stimuli, that sensation changes slightly to, say, "the sensation of being alive and seeing the colour blue". This is why I say that panpsychism isn't the same as believing that rocks are conscious - consciousness is a special, complex type of causal relation, a sub-category into which inanimate objects don't fit unless you spend a lot of time and energy constructing an AI out of them.
↑ comment by TheAncientGeek · 2014-07-08T14:15:52.328Z · LW(p) · GW(p)
The question is, if our model allows us to predict people's sensory experiences perfectly well on the basis of purely physical phenomena, why do we need to posit qualia?
We can't predict experiences perfectly well, because we can't predict novel experiences, because we can't describe novel experiences, because we can't describe (as opposed to label) non-novel experiences.
↑ comment by metaphysicist · 2012-09-23T20:03:12.759Z · LW(p) · GW(p)
Sophistry. It's madness to say that the blue isn't actually there. But this is tempting for people who like the science we have, because the blue isn't there in that model of reality.
If by blue you mean--as you do--the purely subjective aspect of perceiving the color blue (call that "blue"), then it's only madness to deny it exists if you insist on confusing blue with "blue." No one but a madman would say blue doesn't exist; no philosopher should be caught saying "blue" exists.
If you can show a causal role for pure experience, that would be something else, but instead you speak of the "causal role they appear to play." But we don't want a theory where things play the role they "appear" to play; the illusion of conscious experience includes the seemingness that qualia play a causal role. (Added: as I explain in my account of the related illusion of "free will.")
In short, it just won't do to call qualia nihilism "madness," when you offer no arguments, only exasperation.
But modern physics is mathematical and operational, there is plenty of opportunity for something to actually be a conscious experience, while appearing in the formal theory as a state or entity with certain abstractly characterized structural and algebraic properties.
This simply doesn't solve the problem; not in the least. If you posit abstractly characterized structural entities, you are still left with the problem regarding what makes that configuration give the appearance "blue." You're also left with the problem of explaining why evolution would have provided a means of registering these "abstractly characterized structural and algebraic properties" when they make no difference for adaptation.
My guess is that you espouse an epistemology that makes sense data necessary. Completely freeing epistemology from sensationalism is a virtue rather than a vice: philosophers have been looking for a way out of sensationalism since Karl Popper's failed falsificationism.
You need an argument better than alleging madness. Many things seem blatantly wrong before one reflects on them.
↑ comment by Mitchell_Porter · 2012-09-23T22:14:37.609Z · LW(p) · GW(p)
If you can show a causal role for pure experience, that would be something else, but instead you speak of the "causal role they appear to play."
I was actually talking more about the deduction that experiences are causally downstream from physical stimulation of sense organs, and causally upstream from voluntary motor action. This deduction is made because the physical brain is in that position; the physical causal sequence matches up with the subjectively conceived causal sequence "influences from outside me -> my experiences -> my actions"; so one supposes that experiences are in the brain and relevant to "physical" causality.
If you posit abstractly characterized structural entities, you are still left with the problem regarding what makes that configuration give the appearance "blue."
To say that these entities have abstract structure, is not to say that that is the whole of their being. I am only emphasizing how qualia, and things made out of qualia, can be part of a mathematically characterized fundamental physics. The mathematical theory would talk about a causal network of basic objects characterized with the abstruseness typical of such theories - e.g. as combinations of elements of an algebra - and some of those objects would in reality be qualia.
If you were then to ask "what makes one of those objects blue? what makes it look blue?" - those are questions which could not be answered solely on the mathematical level, which doesn't even talk about blue, only about abstracted structural properties and abstracted causal roles. They could only be tackled in a fuller ontological context, where you say "this entity from the theory is an experience, this property is the property of being blue, this process is the experiencing of blue", and so on.
It's like the difference between doing arithmetic and talking about apples. You can count apples, and numbers can be calculational proxies for groups of apples, but apples aren't numbers and talking about numbers isn't really the same thing as talking about apples. These abstracted propositions would only belong to the mathematical part of a theory of causally efficacious physical qualia, and that's not the whole theory, in the same way that arithmetic statements about how many apples I have, are not my whole "theory of apples".
The "non-mathematical part" doesn't just include a series of verbal stipulations that "these abstractly characterized entities are 'experiences'". It implicitly also includes a bit of phenomenology: you would need to be able to single out various aspects of your own experience, and know that those are what is meant by the corresponding terms in the theory. You should be able to look at something blue and think, "OK, that's blue, that's property X from the formalism, and my awareness of blue, that's property X'...", and so on, for as far as theory and thought can take you.
That is a long-term ideal for a physical theory of consciousness; nothing we have right now measures up.
↑ comment by bogus · 2012-09-22T15:18:09.793Z · LW(p) · GW(p)
The dualist position holds that experience is a non-physical thing that cannot be explained physically:
Yes. I think this is actually due to a confusion between something physical and something that is explained from an objective POV. Subjective experience is pretty much unique in that it is never observed by anyone other than the subject - but something can be non-objective, and still be a part of a "web of causal relations", which we call the physical world.
↑ comment by [deleted] · 2012-09-22T16:23:02.783Z · LW(p) · GW(p)
Agreed. The really annoying part is that because, as you say:
Subjective experience is pretty much unique in that it is never observed by anyone other than the subject
It's very difficult to point to evidence that subjective experience is just private by definition (as in, if it wasn't uniquely yours it wouldn't be subjective), rather than being private by virtue of having some special super-physical status that makes it impossible to share. The two theories predict the same experimental results in pretty much all cases.
↑ comment by bogus · 2012-09-22T16:50:54.318Z · LW(p) · GW(p)
I think that saying "subjective experience is private" can be rephrased as saying that "our ability to describe reality/the physical world is clearly incomplete". Dualism happens when folks use the Typical Mind Fallacy to convert this fact about how we describe reality into an actual split between "physical stuff" and "the non-physical" that is held to be always true, regardless of the observer.
↑ comment by [deleted] · 2012-09-22T17:05:26.371Z · LW(p) · GW(p)
Ah, now see there I think I disagree a little. I think saying "subjective experience is private" is just expressing an analytic truth. We define subjective experience as being experience as it occurs to an individual, and therefore subjective experience can only be known by the individual. This is not to say that people's experiences can't be identical to one another, rather it just says that my experiences can't be your experiences because if they were they'd be your experiences and not my experiences. So saying "subjective experience is private" doesn't tell us anything new if we already knew what subjective experience was.
The mistake comes when people look for an explanation for why they experience their own sensations but have to hear about other people's second hand. You don't need an explanation for this, it's necessarily true!
Of course I might have misunderstood you. If so, sorry.
↑ comment by bogus · 2012-09-22T17:21:20.116Z · LW(p) · GW(p)
I think saying "subjective experience is private" is just expressing an analytic truth.
I'm not sure this is right, actually. Consider a least convenient case: a world populated by conscious beings (such as AIs) whose subjective experience is actually made up of simple numbers, e.g. bytes stored in a memory address space. (Of course this assumes that Platonic numbers actually exist, if only as perceived by the AIs. Let's just concede this for the sake of argument.) Suppose further that any AI can read every other AI's memory. Then the AIs could know everything there is to know about each other's experiences, yet any one experience is still "subjective" in a sense, because it is associated with a single individual.
↑ comment by [deleted] · 2012-09-22T17:58:55.853Z · LW(p) · GW(p)
I think that if the AIs read one another's memory by copying the files across and opening them with remember.exe, then reading another AI's memory would feel like remembering something that happened to the reader. In that case there would be no subjective experience, because Argency.AI would be able to relive Bogus.AI's memories as though they were his own: experiences would be public, objective.
Alternatively, if the AIs just look at each other's files and consciously interpret them, as I might interpret words that you had written on a page describing an experience, then they're in exactly the same circumstances as us, in which case I think my earlier argument holds.
↑ comment by Peterdjones · 2012-09-23T14:36:45.198Z · LW(p) · GW(p)
But such experiences still aren't subjective in the sense of "private". I don't see what you are getting at. If subjective=private, your AIs don't have subjective experience. Setting up another definition of subjective doesn't stop subjective=private from being analytically true, or true at all. There are lots of things associated with individuals, such as names, which are not subjective.
↑ comment by wedrifid · 2012-09-16T06:07:18.814Z · LW(p) · GW(p)
Yeah, exactly. It sounds like he's denying experience exists or saying that it's illusory, which would be stupid. Experience is an epistemological first principle; it's axiomatic.
Why would I make 'experience' a first principle or an axiom? That sounds utterly impractical and inefficient.
↑ comment by hankx7787 · 2012-10-23T23:44:07.940Z · LW(p) · GW(p)
Upon reflection I think you are right in one tangential respect - characterizing experience as "axiomatic" was a poor choice of words. For a good rationalist nothing is axiomatic, i.e. with the right data you could convince me that 2+2=3 or that A is not-A.
Nevertheless, the existence and validity of your experience as such (not to confuse this with your interpretation or memory of your experience or anything else), is an incredibly fundamental truth that has been confirmed repeatedly and never disconfirmed across a vast scope of contexts (all of them actually) and is relied upon by all other knowledge. So saying that making experience a first principle or axiom is "impractical and inefficient" is rather bizarre, unless you're talking about something completely different than I am.
↑ comment by TheAncientGeek · 2014-07-08T14:09:14.071Z · LW(p) · GW(p)
That amounts to saying that if you solve the hard problem, then there is no longer a hard problem.
It doesn't actually deliver a solution.
comment by Peterdjones · 2012-09-14T18:23:22.516Z · LW(p) · GW(p)
"C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there. One task in philosophy is articulating the intuitions implicit in our thinking, and sometimes rejecting the intuition should result from concluding it employs concepts illogically. What shows the intuition of raw experience is incoherent (self-contradictory or vacuous) is that the terms we use to describe raw experience are limited to the terms for its referents; we have no terms to describe the experience as such, but rather, we describe qualia by applying terms denoting the ordinary cause of the supposed raw experience."
That's an over-generalisation from colour. Pain is a textbook example of a quale, and "pain" describes an effect, a reaction, not a cause, which would be something like "sharp" or "hot". Likewise, words for tastes barely map onto anything objective. "Sweet" kind of means "high in calories", but kind of doesn't, since saccharine is thousands of times sweeter than sugar, but not thousands of times more calorific. And so on.
" The simplest explanation for the absence of a vocabulary to describe the qualitative properties of raw experience is that they don’t exist: a process without properties is conceptually vacuous."
The simplest explanation for the universe is that it doesn't exist. It's not popular, because the universe seems to exist. Explanations need to be adequate to the facts, not just simple.
There is a perspective from which it is surprising we can describe anything that is going on in our heads. Billions of neurons must churn data at rates well in excess of gigabits per second, but speech has a bandwidth of only a few bits per second. So the surprise is that some things, chiefly discursive thought, are expressible at all. Although that is not really a surprise, since we can easily account for it on the assumption that discursive thought is internalised speech.
Since the inexpressibility of qualia can be accounted for given facts about the limited bandwidth of speech, it does not need to be accounted for all over again on the hypothesis that qualia don't exist.
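The bandwidth mismatch described above can be made concrete with a rough back-of-envelope calculation. All figures here are illustrative assumptions for the sketch, not measurements:

```python
# Back-of-envelope comparison of neural vs. speech bandwidth.
# Every number below is a rough illustrative assumption, not a measurement.

NEURONS = 86e9           # approximate neuron count in a human brain (assumed)
MAX_FIRING_HZ = 100      # assumed upper bound on sustained firing rate
BITS_PER_SPIKE = 1       # crude assumption: one bit per spike window

SPEECH_BITS_PER_SEC = 40  # commonly cited order of magnitude for speech

neural_bits_per_sec = NEURONS * MAX_FIRING_HZ * BITS_PER_SPIKE
ratio = neural_bits_per_sec / SPEECH_BITS_PER_SEC

print(f"neural throughput ~ {neural_bits_per_sec:.1e} bits/s")
print(f"speech throughput ~ {SPEECH_BITS_PER_SEC} bits/s")
print(f"ratio ~ {ratio:.1e}")
```

Even with these crude assumptions the gap is around eleven orders of magnitude, which is the point of the argument: almost nothing that happens in the brain could possibly be narrated out loud.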
"D. We believe raw experience exists without detecting it. One error in thinking about the existence of raw experience comes from confusing perception with belief, which is conceptually distinct. When people universally report that qualia “seem” to exist, they are only reporting their beliefs—despite their sense of certainty."
Beliefs about what? It might be just about credible that other people are p-zombies, with no qualia, but with a mistaken belief that they have qualia. However, it is much harder for me to persuade myself that I am a zombie. When I look at a Müller-Lyer illusion, I have a (cognitive, non-perceptual) belief that the lines are the same length, but I will also report that they look different. That second belief is not a belief about belief, it is a belief about how things look.
" Where “perception” is defined as a nervous system’s extraction of a sensory-array’s features, people can’t report their perceptions except through beliefs the perceptions sometimes engender: I can’t tell you my perceptions except by relating my beliefs about them. This conceptual truth is illustrated by the phenomenon of blindsight, a condition in which patients report complete blindness yet, by discriminating external objects, demonstrate that they can perceive them. Blindsighted patients can report only according to their beliefs, and they perceive more than they believe and report that they perceive. Qualia nihilism analyzes the intuition of raw experience as perceiving less than you believe and report you perceive, the reverse of blindsight."
I don't see where you are going with that. Unless your "less" amounts to zero, that doesn't amount to nihilism. Having some qualia, but less than we previously thought, raises the same problems.
"3. The conceptual economy of qualia nihilism pays off in philosophical progress Eliminating raw experience from ontology produces conceptual economy."
So does eliminating matter in favour of free-floating mental content, as do idealists (or perhaps we should call them matter nihilists). Parsimony can be a two-edged sword.
" A. Qualia nihilism resolves an intractable problem for materialism: physical concepts are dispositional, whereas raw experiences concern properties that seem, instead, to pertain to noncausal essences. "
The epiphenomenality of qualia is not something that "seems" intuitively or introspectively; it is a delicately argued position.
" Qualia nihilism offers a compelling diagnosis of where important skeptical arguments regarding the possibility of knowledge go wrong. The arguments—George Berkeley’s are their prototype—reason that sense data, being indubitable intuitions of direct experience, are the source of our knowledge, which must, in consequence, be about raw experience rather than the “external world.”"
One can challenge such arguments on the grounds that the "about" doesn't follow.
"If you accept the existence of raw experience, the argument is notoriously difficult to undermine logically because concepts of “raw experience” truly can’t be analogized to any concepts applying to the external world. Eliminating raw experience provides an effective demolition; rather than the other way around, our belief in raw experience depends on our knowledge of the external world, which is the source of the concepts we apply to fabricate qualia."
I have a more modest proposal: let's eliminate the idea that some things X cannot represent, stand for, or inform us about, some thing Y without being similar or analogous to it.
"Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence."
Whereas the argument for matter is...?
Replies from: Eliezer_Yudkowsky, torekp, common_law↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T03:46:41.440Z · LW(p) · GW(p)
The simplest explanation for the universe is that it doesn't exist. It's not popular, because the universe seems to exist. Explanations need to be adequate to the facts, not just simple.
Upvoted for this line alone. See also, "If nothing exists, I want to know how the nothing works and why it seems to be so highly ordered."
Replies from: TheOtherDave, metaphysicist, Pentashagon↑ comment by TheOtherDave · 2012-09-15T04:31:42.154Z · LW(p) · GW(p)
See also Occam's sandblaster
↑ comment by metaphysicist · 2012-09-18T08:11:30.711Z · LW(p) · GW(p)
"If nothing exists, I want to know how the nothing works and why it seems to be so highly ordered."
If qualia are explained by our innate intuitions (or beliefs)—propositional attitudes—then two questions follow about "how it works":
What is the propositional content of the beliefs?
What evolutionary pressures caused their development?
I make some conjectures in another essay.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:43:55.547Z · LW(p) · GW(p)
Qualia might be beliefs instead of qualia. Matter might be qualia instead of matter.
Replies from: newname↑ comment by Pentashagon · 2012-09-18T18:03:23.941Z · LW(p) · GW(p)
Upvoted for this line alone. See also, "If nothing exists, I want to know how the nothing works and why it seems to be so highly ordered."
Or in other words "I think, therefore I want to explore."
↑ comment by torekp · 2012-09-15T02:46:23.553Z · LW(p) · GW(p)
Pain is a textbook example of a quale, and "pain" describes an effect, a reaction, not a cause, which would be something like "sharp" or "hot". Likewise, words for tastes barely map onto anything object[ive]. "Sweet" kind of means "high in calories", but kind of doesn't, since saccharine is thousands of times sweeter than sugar, but not thousands of times more calorific. And so on.
And as I pointed out in the other thread, our experiences change in response to the relationship between viewer and object even as the object neither changes nor seems to change. We have the ability to be aware of internal states which are intimately involved in, but not informationally exhausted by, cognition of the external world. From a point of view valuing only knowledge of the external world as such, qualia are pure "noise".
But of course, it makes good evolutionary sense for us to be aware of some internal states. (And even if it didn't, evolution was never the perfect designer (witness flea wings and human appendix).) A cognitive system with a penchant for learning might easily take notice of its own internal workings during acts of perception. Such self-awareness might be extremely useful for a social animal. So you are quite wrong to assert, elsewhere in the thread, that subjective qualities would not be expected on the hypothesis of physicalism.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:23:08.699Z · LW(p) · GW(p)
I'm not sure how this is relevant. I was responding to the objection that qualia have no vocabulary of their own, but only parasitize vocabulary relating to external properties.
But of course, it makes good evolutionary sense for us to be aware of some internal states
Sure, but that's introspection, not subjectivity.
So you are quite wrong to assert, elsewhere in the thread, that subjective qualities would not be expected on the hypothesis of physicalism.
I don't think so, bearing in mind that what I mean by "subjectivity" is "objective inaccessibility", not "introspectability".
Replies from: torekp↑ comment by torekp · 2012-09-20T23:08:36.570Z · LW(p) · GW(p)
but that's introspection, not subjectivity
I smell a false dichotomy.
bearing in mind that what I mean by "subjectivity" is "objective inaccessibility"
Just how inaccessible must something be, objectively, to count? Must it be logically impossible to access the state objectively, for example? Depending on how you cash this out, you may be in danger of using the word "subjectivity" idiosyncratically.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T01:03:37.015Z · LW(p) · GW(p)
Must it be logically impossible to access the state objectively, for example?
No. But introspectability is far too weak a standard. I can introspect thoughts that are possible to communicate objectively.
Replies from: torekp↑ comment by torekp · 2012-09-22T00:54:24.051Z · LW(p) · GW(p)
I have already listed another condition besides introspectability:
internal states which are intimately involved in, but not informationally exhausted by, cognition of the external world.
We could easily add conditions or clarifications. For example, let "external world" or "objective access" be specified as what other humans can detect with unaided senses.
↑ comment by common_law · 2012-09-18T08:48:50.264Z · LW(p) · GW(p)
The simplest explanation for the universe is that it doesn't exist. It's not popular, because the universe seems to exist. Explanations need to be adequate to the facts, not just simple... Since the inexpressibility of qualia can be accounted for given facts about the limited bandwidth of speech, it does not need to be accounted for all over again on the hypothesis that qualia don't exist.
But can the inexpressibility of qualia be accounted for by such facts as mentioned? That's the question, since the claim here is that the only supposed fact you have to support your belief that you experience qualia is your inability to doubt that you do. It's hard to see how that's a good reason.
Your claim to account for the ineffability of qualia based on expressive limitations is no different. No facts can tell you whether articulating qualia would exceed our expressive limitations because we have no measure of the expressive demands of a quale. The most you can say is that potential explanations might be available based on expressive limitations, despite our currently having no idea how to apply this concept to "experience."
Whereas the argument for matter is...?
Science. Human practice. Surely not "I just can't help believing that matter exists."
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:11:27.593Z · LW(p) · GW(p)
But can the inexpressibility of qualia be accounted for by such facts as mentioned?
It would be more interesting to put forward a specific objection.
the claim here is that the only supposed fact you have to support your belief that you experience qualia is your inability to doubt that you do. It's hard to see how that's a good reason.
I don't think that anything anywhere is better supported. Can you prove the existence of matter, or the falsity of contradictions without assuming them?
No facts can tell you whether articulating qualia would exceed our expressive limitations because we have no measure of the expressive demands of a quale.
What an odd thing to say. The argument for the inexpressibility of qualia is just the persistent inability of anyone to do so -- like the argument against the existence of time machines. An explanation for that inability is what I gave, just as there are speculative theories against time travel.
Science. Human practice. Surely not "I just can't help believing that matter exists."
I think that if you unpack "science" and "human practice" you will find elements of "we assume without proving"..and "we can't help but believe".
comment by The_Duck · 2012-09-14T05:12:39.543Z · LW(p) · GW(p)
I think simply fully accepting materialism clears up all hard philosophical problems related to consciousness, including "qualia." We can simply go and look at how the brain works, physically. Once we understand all the physical facts (including e.g. the physical causes of people talking about qualia) there are no other facts to understand.
As such, I feel like someone treating "qualia" seriously is a big red (ha) flag. Either they have not embraced materialism, or they are worrying about whether a falling tree that no one hears makes a sound.
Replies from: Eliezer_Yudkowsky, Kaj_Sotala, Peterdjones, None, Bugmaster↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T03:43:13.593Z · LW(p) · GW(p)
Even if examining the brain will make you less confused someday, correctly believing that proposition does not make you any less confused right now.
Replies from: wedrifid↑ comment by wedrifid · 2012-09-15T03:48:44.615Z · LW(p) · GW(p)
Even if examining the brain will make you less confused someday, correctly believing that proposition does not make you any less confused right now.
Or, at least, it doesn't make you not-confused right now. Correctly propagating that belief eliminates the common class of confusion along the lines of "My brain is inherently incomprehensible, why can we comprehend other things but not the brain? Reductionism fails, we must invent new physics to account for mental experiences."
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-16T03:10:10.869Z · LW(p) · GW(p)
Granted.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2012-09-16T07:24:34.852Z · LW(p) · GW(p)
can you clarify what a crossed-out "Granted" means in this context?
Replies from: Viliam_Bur, Eliezer_Yudkowsky↑ comment by Viliam_Bur · 2012-09-16T10:22:34.331Z · LW(p) · GW(p)
Crossed out = retracted comment.
You do this by clicking a "Retract" icon below your comment. It means: just ignore this comment. It could mean that the author no longer agrees with their previous comment, or doesn't feel the comment is useful for discussion, or something else.
It is something like deleting the comment, except that it is not deleted technically. So you can for example look at the replies to this comment, and they still make sense.
Once retracted, the comment cannot be un-retracted.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-16T19:16:01.157Z · LW(p) · GW(p)
Made the comment, realized it didn't add anything.
↑ comment by Kaj_Sotala · 2012-09-14T10:34:53.023Z · LW(p) · GW(p)
That materialism will be capable of explaining qualia is an empirical hypothesis, which has not yet been shown true nor false. One can accept materialism while remaining agnostic about whether it can explain qualia, just like one can accept economics without necessarily requiring it to explain physics.
Replies from: None, J_Taylor, Douglas_Knight, common_law↑ comment by [deleted] · 2012-09-14T16:59:18.339Z · LW(p) · GW(p)
If there is a qualia thing that is in fact a thing in the world, then materialism (the study of things in the world) can explain it.
Maybe there is some barrier to actually figuring something out, like it's really hard and we die before we figure it out. Maybe that's what you meant? Or did you literally mean that it's possible in principle that materialism can't explain some phenomenon?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-09-14T20:59:15.125Z · LW(p) · GW(p)
Or did you literally mean that it's possible in principle that materialism can't explain some phenomenon?
This is what I meant.
I believe that materialism will eventually explain why beings would act just as if certain processes in their nervous system (or equivalent) produced qualia. I am agnostic about whether it will ever explain why those beings actually have qualia, and don't merely act like it.
Replies from: Vaniver↑ comment by Vaniver · 2012-09-14T21:14:39.890Z · LW(p) · GW(p)
I am agnostic about whether it will ever explain why those beings actually have qualia, and don't merely act like it.
I wouldn't call myself "agnostic" on that - I would claim that it's an unquestion if it doesn't cash out as differing predictions in a materialistic interpretation. (This is sometimes what people mean by agnostic, but typically agnostic describes the "above my pay grade" response, not the "beneath my notice" response.)
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-09-18T09:56:04.502Z · LW(p) · GW(p)
It may be relevant for ethically important questions such as "how realistic a simulation of a suffering being can we make without actually causing any real suffering".
↑ comment by Douglas_Knight · 2012-09-18T02:51:32.864Z · LW(p) · GW(p)
What do you mean by "empirical"?
Given a putative explanation, how do you assess it?
It appears to me that you are merely saying that you do not accept the putative explanation that the Duck (among many others) accepts. Putting it in impersonal language seems extremely misleading to me. Moreover, the existence of the disagreement appears to be strong evidence against the claim that this is an empirical question, at least if "empirical" is interpreted in an impersonal way.
Maybe your point is your second sentence and your disagreement is a minor detail, but I find your phrasing emphasizes disagreement and distracts from the second sentence. Indeed, the second sentence seems to take a personal view of acceptance of arguments.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-09-18T10:04:00.515Z · LW(p) · GW(p)
The claim that "materialism will be capable of explaining qualia" is proven if materialism does indeed come up with a convincing explanation of qualia. And while one can't disprove it entirely, the claim becomes quite improbable if we ever reach a point in time where it looks like we've solved every other scientific mystery aside from the problem of qualia.
I have no idea of how I'd assess a proposed materialistic explanation of qualia, given that such an explanation seems to me impossible in principle. But then, just because I'm incapable of imagining such an explanation doesn't mean that it would actually be impossible to come up with one, so I remain open to the possibility of someone coming up with it regardless.
↑ comment by common_law · 2012-09-18T07:16:17.068Z · LW(p) · GW(p)
One can accept materialism while remaining agnostic about whether it can explain qualia, just like one can accept economics without necessarily requiring it to explain physics.
Materialism is a philosophy which claims the primacy of physics. A materialist can be either a reductionist or an eliminativist about qualia.
The analogy to economics is bad because economics doesn't contend that economics is primary over physics, but materialism does contend that the physical is primary over the mental.
Replies from: Peterdjones, Kaj_Sotala↑ comment by Peterdjones · 2012-09-18T20:53:26.657Z · LW(p) · GW(p)
Materialism is a philosophy which claims the primacy of physics
I don't see why that shouldn't be called physicalism.
↑ comment by Kaj_Sotala · 2012-09-18T09:58:33.352Z · LW(p) · GW(p)
I suppose I'm using "materialism" in a slightly different way, then - to refer to a philosophy which claims that mental processes (but not necessarily qualia) are a subset of physical processes, and thus explainable by physics.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-09-18T10:29:50.684Z · LW(p) · GW(p)
I don't know what you mean by "mental". By what concept of "mental processes" are qualia not mental?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-09-18T15:58:37.444Z · LW(p) · GW(p)
I'm not even sure that I agree with this myself, and I realize that this is a bit of a circular definition, but let's try: mental processes are those which are actually physically occurring in the brain (while qualia seem to be something that's produced as a side-effect of the physical processes).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-09-18T16:18:31.377Z · LW(p) · GW(p)
mental processes are those which are actually physically occuring in the brain
That's like redefining "sensation" to mean "afferent neural signal", which is what necessitated inventing the word "qualia" to stand for what "sensation" used to mean. That one's a lost cause, but to use "mental process" to mean "the physical counterpart of what we used to call a mental process but we don't have a word for any more" is just throwing a crowbar into the discourse. Maybe we need a term for "the physical counterpart of a mental process" to distinguish them from other physical processes, but "mental process" can't be it.
↑ comment by Peterdjones · 2012-09-14T10:54:15.932Z · LW(p) · GW(p)
Once we understand all the physical facts (including e.g. the physical causes of people talking about qualia) there are no other facts to understand.
How do you know? If materialism is a scientific hypothesis, it is disprovable, ie it could run into a phenomenon it cannot explain. OTOH, if it is a case of dogmatically rejecting anything that doesn't fit a materialistic worldview, how is that rational?
Replies from: The_Duck, J_Taylor↑ comment by The_Duck · 2012-09-14T18:57:08.353Z · LW(p) · GW(p)
If materialism is a scientific hypothesis, it is disprovable, ie it could run into a phenomenon it cannot explain.
I could imagine such a thing happening. The fact that it hasn't happened is why we should be firm materialists. As it stands, we have every reason to expect that when we delve into the neurobiology of the brain, we will find a complete, material, physical explanation for the phenomenon of "people talking about qualia." Yes, there's "still a chance" that consciousness may turn out to somehow lie outside the realm of physics as we know it, but that doesn't license you to believe or expect it.
Replies from: Peterdjones, Bruno_Coelho↑ comment by Peterdjones · 2012-09-18T12:49:17.469Z · LW(p) · GW(p)
Materialism could be a well-confirmed hypothesis that we should accept fairly firmly, but that doesn't "clear up" any problems whatsoever. Believing, today, that qualia will one day have a materialistic explanation does not tell us today what that explanation is.
Replies from: The_Duck↑ comment by The_Duck · 2012-09-18T20:48:23.904Z · LW(p) · GW(p)
Yes, I agree. I'm only claiming that materialists should classify the remaining hard work as neurobiology, not philosophy. On the philosophical side, we should realize that the answer to questions like "How do material brains give rise to immaterial qualia?" is "There are no immaterial things; investigate the brain more thoroughly and you will understand the basis of internal experience."
Replies from: Peterdjones, Peterdjones, bogus, Eugine_Nier↑ comment by Peterdjones · 2012-09-20T15:26:48.100Z · LW(p) · GW(p)
"How do material brains give rise to immaterial qualia?" is "There are no immaterial things;
It's not clear who is supposed to be posing that question. The Hard Problem is usually posed without prejudice to the materiality of qualia.
↑ comment by Peterdjones · 2012-09-18T21:01:08.031Z · LW(p) · GW(p)
That is an expecation about an answer, not an answer.
↑ comment by bogus · 2012-09-18T20:57:28.339Z · LW(p) · GW(p)
Yes, I agree. I'm only claiming that materialists should classify the remaining hard work as neurobiology, not philosophy.
This is not clear at all - even though I do otherwise agree with your physicalist premises - because the most detailed evidence about subjective experience has been collected by philosophers, namely phenomenologists. The "hard" work probably encompasses any of biology, physics and philosophy.
↑ comment by Eugine_Nier · 2012-09-19T03:03:17.554Z · LW(p) · GW(p)
Could you taboo "material"/"immaterial". In particular are, say, video game characters "material"?
↑ comment by Bruno_Coelho · 2012-09-16T21:12:59.040Z · LW(p) · GW(p)
Expecting the brain to be non-reducible makes you open to magic explanations.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T12:49:53.871Z · LW(p) · GW(p)
Expecting it to be reducible is not in itself an explanation.
↑ comment by J_Taylor · 2012-09-14T14:34:24.561Z · LW(p) · GW(p)
Materialism is neither a scientific hypothesis, nor a case of dogmatically rejecting anything that doesn't fit a materialistic worldview.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T14:51:53.598Z · LW(p) · GW(p)
So how is the lifting being done? By elimination, as per your other comment?
Replies from: J_Taylor, None↑ comment by J_Taylor · 2012-09-15T17:55:43.186Z · LW(p) · GW(p)
So how is the lifting being done?
Could you please rephrase this question?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:19:08.253Z · LW(p) · GW(p)
How does one solve problems by "adopting materialism"?
Replies from: J_Taylor↑ comment by [deleted] · 2012-09-14T16:55:23.004Z · LW(p) · GW(p)
Materialism is the useful tautology that everything that is woven into the Great Web of Causality falls under the category of "physics". And that by "physics" we mean "everything in the GWC".
Non-materialism is the non-useful statement that some things exist and affect the GWC without being part of the GWC.
Replies from: Peterdjones, Eugine_Nier↑ comment by Peterdjones · 2012-09-14T18:55:43.464Z · LW(p) · GW(p)
I don't see the usefulness. There's a useful distinction between, for instance,
"everything reduces to the behaviour of its smallest constituents"
and
"there are multiple independent layers, each with their own laws and causality".
I can also see the difference between
"Everything that effects is effected"
and
"There are uncaused causes and epiphenomenal danglers".
Replies from: None↑ comment by [deleted] · 2012-09-14T19:02:19.076Z · LW(p) · GW(p)
reductionism is orthogonal to materialism
uncaused causes are empirically verifiable (we have no clear examples)
Once you clear up all the crap around dangling epiphenomena with the GAZP, what's left has no use.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T19:44:13.131Z · LW(p) · GW(p)
reductionism is orthogonal to materialism
Maybe. But if you distinguish them, it turns out that the work is being done by R-ism.
uncaused causes are empirically verifiable (we have no clear examples)
We have candidates, such as the big bang, and the possible disappearance of information in black holes.
Once you clear up all the crap around dangling epiphenomena with the GAZP, what's left has no use.
I'm still rather unpersuaded that you can solve problems by adopting beliefs. Sounds too much like faith to me.
Replies from: None↑ comment by [deleted] · 2012-09-20T19:55:44.187Z · LW(p) · GW(p)
I'm still rather unpersuaded that you can solve problems by adopting beliefs. Sounds too much like faith to me.
Likewise. I wonder what you are referring to?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T20:05:23.522Z · LW(p) · GW(p)
I wonder what you are referring to?
The_Duck wrote:
I think simply fully accepting materialism clears up all hard philosophical problems related to consciousness, including "qualia."
I seem to have translated "accepting" into "adopting"
↑ comment by Eugine_Nier · 2012-09-16T22:50:50.037Z · LW(p) · GW(p)
Can you give a materialist account of this "Great Web of Causality"?
Replies from: None↑ comment by [deleted] · 2012-09-17T19:22:27.884Z · LW(p) · GW(p)
All the things that effect the other things.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-18T00:20:51.709Z · LW(p) · GW(p)
Ok, now taboo "effect".
Replies from: None↑ comment by [deleted] · 2012-09-18T01:42:53.340Z · LW(p) · GW(p)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-19T03:20:30.894Z · LW(p) · GW(p)
So how would I use this description of "effect" to taboo the word in the following sentence?
The mass of an electron has an effect on the properties of hydrogen.
Or would you argue that the above sentence is incoherent.
Replies from: None, TheOtherDave↑ comment by [deleted] · 2012-09-20T20:30:16.589Z · LW(p) · GW(p)
It's not incoherent.
I don't know. I don't understand Pearl's reduction of causality. I just know it's there.
Mathematical relations like "hydrogen properties are dependent on electron mass" might not fit the causality concept. Or maybe I just can't make the math jump.
Anyways, what are you gaining by these questions? Do you have some grand solution that you are making me jump thru hoops to find? Do you think I have some grand solution that you are jumping thru hoops to squeeze out of me?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-20T22:05:51.372Z · LW(p) · GW(p)
Anyways, what are you gaining by these questions? Do you have some grand solution that you are making me jump thru hoops to find? Do you think I have some grand solution that you are jumping thru hoops to squeeze out of me?
I'm trying to show you that materialism in the sense you seem to mean here is ultimately incoherent.
Replies from: None↑ comment by [deleted] · 2012-09-20T22:17:46.376Z · LW(p) · GW(p)
You'll have to explain your position. I can't see it. To clarify what I think, take "me" as a node, and recursively build a causality graph (Pearl's thing) of all the causes that lead into that node. By some theorem somewhere, that graph will be connected. Then label that graph "my map of the universe" and label its compressing model "physics". That is what "materialism" means to me.
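The recursive graph-building described above can be sketched in a few lines. The graph and its node names here are a made-up toy example, not a claim about actual causal structure:

```python
# Toy version of the construction: start from a "me" node and recursively
# gather every cause leading into it. CAUSES maps each node to the list of
# its direct causes; all nodes and edges are illustrative assumptions.

CAUSES = {
    "me": ["my_parents", "my_environment"],
    "my_parents": ["their_parents"],
    "my_environment": ["earth", "sun"],
    "their_parents": [],
    "earth": ["sun"],
    "sun": ["big_bang"],
    "big_bang": [],
}

def ancestors(node, graph):
    """Collect every node with a causal path leading into `node`."""
    seen = set()
    stack = [node]
    while stack:
        current = stack.pop()
        for cause in graph.get(current, []):
            if cause not in seen:
                seen.add(cause)
                stack.append(cause)
    return seen

# "My map of the universe" is the subgraph reachable backwards from "me".
my_map = ancestors("me", CAUSES) | {"me"}
print(sorted(my_map))
```

By construction every node in `my_map` is connected to "me" through some causal chain, which is the connectedness property the comment appeals to.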
I've just realized, tho, that the rest of you might attach a different concept to "materialism", but I don't know what it is. Can you give me a steel-man (or a straw man (or a nonmaterial entity)) version of what "materialism" means to you?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-22T03:43:05.438Z · LW(p) · GW(p)
To clarify what I think, take "me" as a node, and recursively build a causality graph (Pearl's thing) of all the causes that lead into that node. By some theorem somewhere, that graph will be connected. Then label that graph "my map of the universe" and label its compressing model "physics". That is what "materialism" means to me.
I think you are making a category error with respect to what Pearl's theory actually does.
Replies from: None↑ comment by [deleted] · 2012-09-23T00:42:55.880Z · LW(p) · GW(p)
Care to expand? His Bayesian networks stuff is for modelling causal relationships. Am I confused?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-24T22:54:58.337Z · LW(p) · GW(p)
This comment by Argency explains what I mean by causality being incompatible with pure materialism.
↑ comment by TheOtherDave · 2012-09-19T07:01:15.995Z · LW(p) · GW(p)
I suspect you mean "affects."
↑ comment by [deleted] · 2012-09-16T20:38:33.471Z · LW(p) · GW(p)
Once we understand all the physical facts (including e.g. the physical causes of people talking about qualia) there are no other facts to understand.
It's the last bit here that's controversial. Why are there no other facts to understand past the physical ones? What's the argument for that?
Here's what I mean: Say that whenever I see that something is red, a certain neural network is activated, call it the R-network. Once we discover that seeing red is, physically, the activation of the R-network, should we then say that there are two facts ('I saw a red thing' and 'My R-network was activated') or only one fact ('My R-network was activated')? We might readily admit that seeing red is reducible to the activation of the R-network, but that alone doesn't mean that the fact 'I saw a red thing' is not a fact.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-16T23:09:47.370Z · LW(p) · GW(p)
Every time I see a red thing, I see a thing. So, are "I see a red thing" and "I see a thing" two separate facts? If so, then I cannot imagine what value there is in counting facts. On that account simply listing all the facts that derive from a given observation ("I see a thing that isn't blue" "I see a thing that isn't yellow" "I see a thing that isn't orange" etc. etc. etc.) would take a lifetime. It might be useful to Taboo "fact".
Replies from: None↑ comment by [deleted] · 2012-09-16T23:37:15.792Z · LW(p) · GW(p)
So, are "I see a red thing" and "I see a thing" two separate facts?
I think they'd have to be, since they're not mutually entailing. They certainly can't be identical facts.
If so, then I cannot imagine what value there is in counting facts.
Safe to say, there is an uncountable infinity of facts, whether or not we restrict ourselves to physical facts. The question is whether or not there are non-physical facts (where an experience of a red thing is taken to be a non-physical fact). So this isn't a question of quantity or counting.
It might be useful to Taboo "fact".
It might. What do you suggest?
Replies from: TheOtherDave, Peterdjones↑ comment by TheOtherDave · 2012-09-17T00:43:04.453Z · LW(p) · GW(p)
The question is whether or not there are non-physical facts (where an experience of a red thing is taken to be a non-physical fact).
Well, if an experience of a red thing is taken to be a non-physical fact, then there are certainly non-physical facts, inasmuch as there are experiences of red things.
What do you suggest?
I don't know, since I'm not really sure what you have in mind when you say "nonphysical fact," beyond knowing that experiencing red is an example. That's why I suggested it.
Replies from: None↑ comment by [deleted] · 2012-09-17T01:26:30.981Z · LW(p) · GW(p)
Well, if an experience of a red thing is taken to be a non-physical fact, then there are certainly non-physical facts, inasmuch as there are experiences of red things.
Agreed. I think it's illegitimate to suggest that the problem of qualia can be dismissed by associating experiential facts with physical facts, and then revoking the fact-license of the experiential one. This isn't to say that I think the problem of qualia is an unsolved one. It just can't be solved (or dissolved or whatever) like that.
I don't know, since I'm not really sure what you have in mind when you say "nonphysical fact," beyond knowing that experiencing red is an example.
I was using the term 'fact' as I understood Duck to be using it. I guess I'd say a fact is something that's true. (Though we use the term ambiguously, sometimes meaning 'the state of affairs about which a true thing is said' or something like that.) A physical fact is something that's true and that's about nature. An astrological fact is something that's true and that's about astrological stuff (and from this we get the conclusion that there are no positive astrological facts).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-17T01:52:03.457Z · LW(p) · GW(p)
Well, I certainly agree that all of this semantic pettifoggery gets us no closer to understanding what distinguishes systems capable of having experiences from those that aren't, or how to identify a real experience that we ourselves aren't having, or how to construct systems capable of having experiences, or how to ensure that systems we construct won't have experiences.
↑ comment by Peterdjones · 2012-09-18T20:52:11.904Z · LW(p) · GW(p)
Safe to say, there is an uncountable infinity of facts, whether or not we restrict ourselves to physical facts.
Well, there's an infinity of true statements. Some folks like to restrict "fact" to what is not Cambridge
Replies from: None↑ comment by [deleted] · 2012-09-18T23:41:38.084Z · LW(p) · GW(p)
That wouldn't matter to the number of facts though. Anything, for example, which weighs 1 lb weighs more than .9 lb. And there are uncountably many weights between .9 and 1 lb that this thing is heavier than. All those are facts by anyone's measure.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T16:24:19.312Z · LW(p) · GW(p)
Not by anyone's measure. There are those who would say there is one basic fact, which has to be derived empirically, and a host of logically derivable true statements.
↑ comment by Bugmaster · 2012-09-14T06:17:03.768Z · LW(p) · GW(p)
Agreed; furthermore, from the point of view of materialism, several of the "hard problems" related to qualia simply go away. For example, the question "why do you see the same red as I do when looking at this red text ?" is easily answered with "mu", because there's no Platonic ideal of redness, and thus no "same" red.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T11:32:08.206Z · LW(p) · GW(p)
I can't make sense of that. For one thing, materialism doesn't imply nominalism. For a materialist, there could be a Form of the Electron. For another, there is still some phenomenon of sameness in a material world: all electrons are identical.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-14T18:59:36.246Z · LW(p) · GW(p)
For a materialist, there could be a Form of the Electron.
I'm not sure what this means; can you expand on it a bit ?
What I meant to say was that "I see the color red" is, in materialist-speak (or at least my personal understanding on it), a shorthand for something like this (warning, I'm not a neuroscientist, so I'm probably wrong):
"This screen emits photons within a narrow frequency range. These photons then excite the photoreceptors in my eyes, which cause certain electrochemical changes to occur in my brain. These changes propagate and cause my mental model of the world (which in itself is a shorthand for a wide set of brain states) to update in a specific way.
Since all human brains are very similar to each other, due to evolutionary as well as environmental factors, it is very likely that your own brain states will undergo similar changes when these photons excite your own receptors. That is to say, we could create a probabilistic mapping between my brain states and yours, and predict the future state of your brain (due to those photons hitting your eyes) based on mine, with high degree of certainty.
However, since no two brains (or two sets of eyes, even) are identical, the exact changes in your own brain states will be different from mine, and the aforementioned mapping cannot be exact."
In other words, there's no such thing as a "perfect red", since everyone's brains are different. In fact, there's some evidence to suggest that color perception is strongly shaped by language and culture, etc.
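The "probabilistic mapping" idea can be made concrete with a toy sketch. Assume, purely for illustration (the state labels below are invented), that we somehow had paired, labelled brain-state readings from two people looking at the same stimuli. Predicting your state from mine then reduces to estimating conditional frequencies:

```python
from collections import Counter, defaultdict

# Invented paired readings: (my labelled brain state, your labelled brain state)
# recorded while we both looked at the same stimulus.
paired = [
    ("A_red", "B_red"), ("A_red", "B_red"), ("A_red", "B_crimson"),
    ("A_blue", "B_blue"), ("A_blue", "B_blue"),
]

cond = defaultdict(Counter)  # counts of your states, given mine
for mine, yours in paired:
    cond[mine][yours] += 1

def predict(my_state):
    # Estimate P(your state | my state) by simple counting.
    counts = cond[my_state]
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

print(predict("A_red"))  # mostly 'B_red', sometimes 'B_crimson'
```

The mapping is probabilistic and inexact for exactly the reason given above: since no two brains are identical, the two logs never line up perfectly.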
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T14:41:29.506Z · LW(p) · GW(p)
Given the tenor of your further comments, I misunderstood you. You are claiming that, given materialism, qualia probably vary with slight variations in brain structure. Although the conclusion really follows from something like a supervenience principle, not just from the materiality of all things. And although qualia only probably vary. There could still be a "same" red. And although we don't have a theory of how qualia depend on brain states -- which is, in fact, the Hard Problem. And the Hard Problem remains unaddressed by an assumption of materialism, so materialism does not clear up "all hard problems".
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-18T16:27:32.435Z · LW(p) · GW(p)
Although we don't have a theory of how qualia depend on brain states -- which is, in fact, the Hard Problem
In my response, I was trying to say that "qualia" are brain states. I put the word "qualia" in quotes because, as far as I understand, this word implies something like, "a property or entity that all beings who see this particular shade of red share", but I explicitly deny that such a thing exists.
Everyone's brains are different, and not everyone experiences the same red, or does so in the same way. The fact that our experiences of "red" are similar enough to the point where we can discuss them is an artifact of our shared biology, as well as the fact that we were all brought up in the same environment.
Anyway, if "qualia" are brain states, then the question "how do qualia depend on brain states" is trivially answered.
Replies from: Peterdjones, bogus↑ comment by Peterdjones · 2012-09-18T19:56:01.644Z · LW(p) · GW(p)
In my response, I was trying to say that "qualia" are brain states
My use of "depend" was not meant to exclude identity. I had in mind the supervenience principle, which is trivially fulfilled by identity.
"a property or entity that all beings who see this particular shade of red share"
I am not sure where you got that from. C. I. Lewis defined qualia as a "sort of universal", but I don't think there was any implication that everyone sees 600nm radiation identically. OTOH, one's personal qualia must recur to a good degree of accuracy or one would be able to make no sense of one's sensory input.
Anyway, if "qualia" are brain states, then the question "how do qualia depend on brain states" is trivially answered.
Interestingly, that is completely false. Knowing that a bat-on-LSD's qualia are identical to its brain states tells me nothing about what they are (which is to say what they seem like to the bat in question... which is to say what they are, since qualia are by definition seemings. [If you think there are two or three meanings of "are" going on there, you might be right]).
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-19T01:04:33.657Z · LW(p) · GW(p)
OTOH, one's personal qualia must recur to a good degree of accuracy or one would be able to make no sense of one's sensory input.
Agreed. I was just making sure that we aren't talking about some sort of Platonic-realm qualia, or mysterious quantum-entanglement qualia, etc. That's why I personally dislike the word "qualia"; it's too overloaded.
Knowing that a bat-on-LSD's qualia are identical to its brain states tells me nothing about what they are (which is to say what they seem like to the bat in question...
If I am correct, then you personally could never know exactly what another being experiences when it looks at the same red object that you're looking at. You may only emulate this knowledge approximately, by looking at how its brain states correlate with yours. Since another human's brain states are pretty similar to yours, your emulation will be fairly accurate. A bat's brain is quite different from yours, and thus your emulation will not be nearly as accurate.
However, this is not the same thing as saying, "bats don't experience the color red (*)". They just experience it differently from humans. I don't see this as a problem that needs solving, though I could be missing something.
(*) Assuming that bats have color receptors in their eyes; I forgot whether they do or not.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T14:20:30.211Z · LW(p) · GW(p)
Agreed. I was just making sure that we aren't talking about some sort of Platonic-realm qualia,
I don't think anyone has raised that except you.
If I am correct, then you personally could never know exactly what another being experiences when it looks at the same red object that you're looking at.
Although, under many circumstances, I could know approximately.
However, this is not the same thing as saying, "bats don't experience the color red".
Bats have a sense that humans don't have, sonar, and if they have qualia, they presumably have some kind of radically unfamiliar-to-humans qualia to go with it. That is an issue of a different order to not knowing exactly what someone else's Red is like. And, again, it is not a problem solved by positing the identity of the bat's brain state and its qualia. Identity theory doesn't explain qualia in the sense of explaining how variations in qualia relate to variations in brain state.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-20T18:42:00.835Z · LW(p) · GW(p)
Although, under many circumstances, I could know approximately.
Agreed.
Bats have a sense that humans don't have, sonar, and if they have qualia, they presumably have some kind of radically unfamiliar-to-humans qualia to go with it.
I wasn't talking about sonar, but about good old-fashioned color perception. A bat's brain is very different from a human's. Thus, while you can approximate another human's perception fairly well, your approximation of a bat's perception would be quite inexact.
Identity theory doesn't explain qualia in the sense of explaining how variations in qualia relate to variations in brain state.
I'm not sure I understand what you mean. If we could scan a bat's brain, and understand more or less how it worked (which, today, we can't do), then we could trace the changes in its states that would propagate throughout the bat when red photons hit its eyes. We could say, "aha, at this point, the bat will likely experience something vaguely similar to what we do, when red photons hit our eyes". And we could predict the changes in the bat's model of the world that will occur as the result. For example, if the bat is conditioned to fear the color red for some reason, we could say, "the bat will identify this area of its environment as dangerous, and will seek to avoid it", etc.
If the above is true, then what is there left to explain ?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T19:17:06.041Z · LW(p) · GW(p)
If the above is true, then what is there left to explain ?
Radically unfamiliar-to-humans qualia. You have picked an easy case, I have picked a difficult one. If we want to know what the world sonars like to a bat on LSD, identity theory doesn't tell us.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-20T19:32:11.336Z · LW(p) · GW(p)
You have picked an easy case, I have picked a difficult one. If we want to know what the world sonars like to a bat on LSD, identity theory doesn't tell us.
Well, in point of fact, I've personally never done LSD, so I don't know what color perception is like for another human on LSD, either. I could make an educated guess, though.
In the case of the bat sonar, the answer is even simpler, IMO: we lack the capacity to experience what the world sonars like to a bat, except in the vaguest terms. Again, I don't see this as a problem. Bats have sonars, we don't.
Note that this is very different from saying something like "we can't know whether bats experience anything at all through their sonar", or "even if we have scanned the bat's brain, we can't predict what changes it would undergo in response to a particular sonar signal", etc. All I'm saying is, "we cannot create a sufficiently accurate mapping between our brain states and the bat's, as far as sonaring is concerned".
Again, I'm not entirely sure I understand what additional things we need to explain w.r.t qualia.
Replies from: gwern, Peterdjones↑ comment by gwern · 2012-09-21T02:40:05.984Z · LW(p) · GW(p)
Well, in point of fact, I've personally never done LSD, so I don't know what color perception is like for another human on LSD, either. I could make an educated guess, though.
Normally I'd assume that I know what you meant and move on, but since this involves LSD... You don't know what it's like? Or you do, but it's an educated guess? What?
Replies from: Bugmaster↑ comment by Peterdjones · 2012-09-20T20:17:33.530Z · LW(p) · GW(p)
In the case of the bat sonar, the answer is even simpler, IMO: we lack the capacity to experience what the world sonars like to a bat, except in the vaguest terms. Again, I don't see this as a problem
I see that as a problem for the claim that mind-brain identity theory explains qualia. It does not enable us to understand the bat's qualia, or to predict what they would be like. However, other explanations do lead to understanding and predicting.
Again, I'm not entirely sure I understand what additional things we need to explain w.r.t qualia.
Understanding and predicting.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-20T20:36:56.404Z · LW(p) · GW(p)
I guess I'm not entirely sure what you mean by "understanding" and "predicting". As I said, if we could scan the bat's brain and figure out how all of its subsystems influence each other, we would know with a very high degree of certainty what happens to it when the bat receives a sonar signal. We could identify the changes in the bat's model of the world that would result from the sonar signal, and we could predict them ahead of time.
Thus, for example, we could say, "if the bat is in mid-flight, and hungry, and detects its sonar reflecting from a small object A of size B and shape C etc., then it would alter its model of the world to include a probable moth at the object's approximate location (*). It would then alter course to intercept the moth, by sending out signals to its wing muscles as follows: blah blah".
Are predictions of this sort insufficient ? If so, what additional predictions could be made by those other explanations you mentioned ?
(*) Disclaimer: I don't really know much about the hunting habits of real-life bats.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T00:02:03.131Z · LW(p) · GW(p)
Are predictions of this sort insufficient ?
More irrelevant than insufficient. None of them are actually about qualia, about how things seem to experiencing subjects. You have Substituted an Easier Problem.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-21T00:10:50.268Z · LW(p) · GW(p)
Is "how things seem to experiencing subjects" somehow different from "things happening to the brains of experiencing subjects" ? If so, how ?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T01:00:25.812Z · LW(p) · GW(p)
We can't figure out the former from the latter. If we want to know what such-and-such an experience is like, a description of a brain state won't tell us. They might still be identical in some way we can't understand... but then we can't understand it. So it remains the case that m/b identity theory doesn't constitute an explanation.
Replies from: bogus, Bugmaster↑ comment by bogus · 2012-09-21T01:59:39.320Z · LW(p) · GW(p)
The map is not the territory. Just because descriptions of our brain states won't help us figure out what subjective experiences are like (either currently or in the foreseeable future), doesn't mean that those experiences aren't a part of the physical world somehow. Reductionism has been a very successful paradigm in our description of the physical world, but we can't state with any confidence that it has captured what the ontologically basic, "ground" level of physics is really like.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T11:08:30.160Z · LW(p) · GW(p)
The map is not the territory. Just because descriptions of our brain states won't help us figure out what subjective experiences are like (either currently or in the foreseeable future), doesn't mean that those experiences aren't a part of the physical world somehow
OK. I am not arguing for dualism. I am arguing against the claim that adopting reductionism, or materialism, or m/b identity constitutes a resolution of any Hard Problem. What you are saying is that m/b identity might be true as unintelligible brute fact. What I am saying is that brute facts aren't explanations.
↑ comment by Bugmaster · 2012-09-21T08:47:38.053Z · LW(p) · GW(p)
If we want to know what such-and-such an experience is like, a description of a brain state won't tell us.
I read this sentence as,
"If we want to build an approximate mapping between someone else's brain states and ours, a description of a brain state won't help us".
That sounds contradictory to me.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T11:02:30.838Z · LW(p) · GW(p)
Is your paraphrase actually a fair translation of my comment? Are "mappings" things that tell people what such-and-such an experience is like, as if they had had it themselves? What, concretely, is a mapping?
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-23T20:19:20.617Z · LW(p) · GW(p)
Our goal is to estimate what someone else will experience, "from the inside", in response to some stimulus -- given that we know what we'd experience in response to that stimulus. One way to do it is to observe our own brains in action, and compare them to the other brain under similar conditions. This way, we can directly relate specific functions of our brain to the target brain. To use a rather crude and totally inadequate example, we could say,
"Every time I feel afraid, area X of my brain lights up. And every time this bat acts in a way that's consistent with being afraid, area Y of its brain lights up. Given this, plus what we know about biology/evolution/etc., I can say that Y performs the same function as X, with 95% confidence."
That's a rather crude example because brains can't be always subdivided into neat parts like that, and because we don't know a lot about how they work, etc. etc. Still, if we could relate the functioning of one brain to another under a variety of circumstances with some degree of certainty, we'd have a "mapping".
When you say, "I think if another human saw this piece of paper, he'd believe it was red", you're referencing the "mapping" that you made between your brain and the other human's. Sure, you probably created this mapping based on instinct or intuition, rather than based on some sort of scientific analysis, but it still works; in fact, it works so well you don't even need to think about it.
In the case of bat sonar, we'd have to analytically match up as many of our mental functions to the bat's, and then infer where the sonar would fit in -- since we humans don't have one of those. Thus, while we could make an educated guess, our degree of confidence in it would be low.
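To make the crude fear-area example slightly more concrete, here is a toy sketch (the area names and trial counts are all invented for illustration) of how one might pick the bat area that best matches mine, with the co-activation fraction standing in for the "95% confidence":

```python
from collections import Counter

# Invented trial log: on each trial, which of my brain areas lit up and
# which of the bat's did under a comparable situation.
trials = [
    ("X", "Y"), ("X", "Y"), ("X", "Y"), ("X", "Z"),  # fear-like situations
    ("M", "N"), ("M", "N"),                          # feeding situations
]

def best_match(trials, my_area):
    # Pick the bat area that most often co-activates with `my_area`, and
    # report the fraction of trials on which it did so as a rough confidence.
    counts = Counter(bat for mine, bat in trials if mine == my_area)
    bat_area, n = counts.most_common(1)[0]
    return bat_area, n / sum(counts.values())

print(best_match(trials, "X"))  # ('Y', 0.75)
```

Accumulating such matches over many kinds of situations is what I mean by a "mapping" between the two brains.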
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T07:21:23.233Z · LW(p) · GW(p)
OK. The cases where confidence is low are the cases where a description of a brain state won't help.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-24T07:31:51.582Z · LW(p) · GW(p)
Agreed; but then, what is your goal ? If you are trying to answer the question, "how would it feel to have sonar", one possible answer is, "you can't experience it directly, but you'd be able to sort of see intermittently in the dark, except with your ears instead of eyes; here's a detailed probabilistic model". Is that not enough ? If not, what else are you looking for, and why do you believe that it's achievable at all ?
Replies from: Nornagest, Peterdjones↑ comment by Nornagest · 2012-09-24T09:57:28.567Z · LW(p) · GW(p)
Some humans do seem to have managed to experience echolocation, and you could presumably ask them about it -- not that that's terribly relevant to the broader question of experience.
↑ comment by Peterdjones · 2012-09-24T07:35:30.392Z · LW(p) · GW(p)
If reductionism is true, I would expect a reductive explanation, and I'm not getting one.
Replies from: Vladimir_Nesov, Bugmaster↑ comment by Vladimir_Nesov · 2012-09-24T08:52:09.716Z · LW(p) · GW(p)
Discussing whether "reductionism is true" or what is a "reductionistic explanation" feels to me like discussing whether "French cuisine is true", it's not apparent what particular query or method of explanation you are talking about. I think it's best to taboo "reductionism" in discussions such as this one.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T09:00:39.107Z · LW(p) · GW(p)
Don't tell me, tell EY... while I'm at a safe distance, please.
↑ comment by Bugmaster · 2012-09-24T07:43:53.367Z · LW(p) · GW(p)
I'm still not seeing what it is that you're trying to explain. I think you are confusing the two statements: a). "bats experience sonar", and b). "we can experience sonar vicariously through bats, somehow".
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T08:18:18.426Z · LW(p) · GW(p)
I'm not claiming to be able to explain anything. Some people have claimed that accepting materialism, or reductionism, or something, solves the hard problem. I am pointing out that it doesn't. The HP is the problem of explaining how experiential states relate in a detailed way to brain states, and materialists are no clearer about that than anyone else.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-24T11:03:31.919Z · LW(p) · GW(p)
I suppose I'm as confused as the average materialist, because I don't see what the "hard problem" even is. As far as I understand, materialism explains it away.
To put it another way, I don't think the fact that we can't directly experience what it's like to be a bat is a philosophical problem that needs solving. I agree that "how experiential states relate in a detailed way to brain states" is a question worth asking, but so are many other questions, such as "how does genetic code relate in a detailed way to expressed phenotypes". People are working on it, though -- just check out Nornagest's link on this thread.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T13:36:59.905Z · LW(p) · GW(p)
To put it another way, I don't think the fact that we can't directly experience what it's like to be a bat is a philosophical problem that needs solving.
Philosophers don't suppose that either.
"The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colors and tastes."--WP
People are working on it, though
Maybe, but you have clearly expressed why it is difficult: you can't predict novel qualia, or check your predictions. If you can't state qualia verbally (mathematically, etc.), then it is hard to see how you could have an explanation of qualia.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-24T17:11:00.669Z · LW(p) · GW(p)
you can't predict novel qualia
How novel are we talking ? If I have a functional model of the brain (which we currently do not, just as we don't have a model of the entire proteome), I can predict how people and other beings will feel in response to stimuli similar to the ones they'd been exposed to in the past. I can check these predictions by asking them how they feel on one hand, and scanning their brains on the other.
I can also issue such predictions for new stimuli, of course; in fact, artists and advertisers implicitly do this every day. As for things like, "what would it feel like to have sonar", I could issue predictions as well, though they'd be less certain.
If you can't state qualia verbally (mathematically, etc.)...
I thought we were stating them verbally already, f.ex. "this font is red". As for "mathematically", there are all kinds of MRI studies, psychological studies, etc. out there, that are making a good attempt at it.
Thus, I'm still not sure what remains to be explained in principle. I get the feeling that maybe you're looking for some sort of "theory of qualia" that is independent of brains, or possibly one that's only dependent on sensory mechanisms and nothing else. I don't think it makes sense to request such a theory, however; it'd be like asking for a "theory of falling" that excludes gravity.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T17:40:09.191Z · LW(p) · GW(p)
in response to stimuli similar to the ones they'd been exposed to in the past.
They wouldn't be novel. I don't mean further instances of the same kind.
I can also issue such predictions for new stimuli, of course; in fact, artists and advertisers implicitly do this every day.
Do they? Surely they make arrangements of existing qualia types.
I thought we were stating them verbally already, f.ex. "this font is red".
That's no good for novel qualia.
Thus, I'm still not sure what remains to be explained in principle.
- why there is phenomenal experience at all
- why we see colours and smell smells--how and why qualia match up to sensory modalities.
- anything to do with qualia we don't have
I get the feeling that maybe you're looking for some sort of "theory of qualia" that is independent of brains,
Nope.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-24T18:05:28.776Z · LW(p) · GW(p)
They wouldn't be novel. I don't mean further instances of the same kind.
What do you mean, then ? I'm still rather confused. Sure, it's interesting to imagine what it'd feel like to have bat sonar (although some people apparently don't have to imagine), but, well, we don't have a sonar at the moment. Once we do, we can start talking about its qualia, and see if our predictions were right.
why there is phenomenal experience at all
That's kind of a broad question. Why do we have eyes at all ? The answer takes a few billion years...
why we see colours and smell smells--how and why qualia match up to sensory modalities.
Again, to me this sounds like, "why do our brain states change in response to stimuli received by our sensory organs (which are plugged into the brains); how and why do brain states match up to brain states". Perhaps you mean something special by "sensory modalities" ?
anything to do with quala we don't have
See above.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T18:43:59.117Z · LW(p) · GW(p)
I mean something like the standard meaning of "novel prediction". Like black holes are a novel prediction of GR.
Sure, "why is there experience at all?" is a broad question. Particularly since you wouldn't expect to find irreducible subjectivity in a physical universe. And it's another question that isn't addressed by Accepting Materialism.
how and why do brain states match up to brain states"
Yes, but you can't make that work in practice. You can't describe a quale by describing the related brain state. For us, given our ignorance, brain states and qualia are informationally and semantically independent, even if they are ontologically the same thing. Which is another way of saying that identity theory doesn't explain much.
Perhaps you mean something special by "sensory modalities" ?
I mean sight is one modality hearing another.
Replies from: None, Bugmaster↑ comment by [deleted] · 2012-09-24T20:14:00.057Z · LW(p) · GW(p)
Particularly since you wouldn't expect to find irreducible subjectivity in a physical universe.
People keep asserting that and it's not obvious. Why would you not expect a being in a "physical" (Q1. what does this mean?) universe, to have "subjective experience" (Q2. what does that mean?)? (Q3 is the question itself)
Please respond
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T20:19:47.497Z · LW(p) · GW(p)
If "physical" is cashed out as "understandable by the methods of the physical sciences", then it follows that "everything is physical" means "everything is understandable from an external, objective perspective". If that is the case, the only kind of subjectivity that could exist is a kind that can be reduced to physics, a kind which is ultimately objective, in the way that the "mental", for physicalists, is a subset of the physical.
Replies from: None↑ comment by [deleted] · 2012-09-24T20:36:32.720Z · LW(p) · GW(p)
everything is understandable from an external, objective perspective
Ok.
What does such a statement predict wrt subjective experience?
please respond
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T23:28:57.793Z · LW(p) · GW(p)
I have said it predicts that there is no irreducible subjective experience.
Replies from: None↑ comment by [deleted] · 2012-09-27T14:57:06.065Z · LW(p) · GW(p)
That "irreducible" part is bothering me. What does it mean? I can see that it could take us out of what "materialism" would predict, but I can't see it doing that without also taking us out of the set of phenomena we actually observe. (the meanings of irreducible that materialism prohibits are also not actually observed, AFAICT).
Anyways, getting downvoted, going to tap out now, I've made my case with the program and whatnot, no one wants to read the rest of this. Apologies for the bandwidth and time.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-27T18:54:40.357Z · LW(p) · GW(p)
Irreducible as in reducible as in reductionism. How can you spend any time on LW and not know what reductionism is? Reducibility is not observed except in the form of explanations published in journals and given in classrooms. Irreducibility is likewise not observed.
↑ comment by Bugmaster · 2012-09-24T19:58:26.518Z · LW(p) · GW(p)
I mean something like the standard meaning of " novel prediction". Like black holes are a novel prediction of GR
I don't know enough neurobiology to offer up any novel predictions off the top of my head; here are some random links off of Google that look somewhat interesting (disclaimer: I haven't read them yet). In general, though, the reduction of qualia directly to brain states has already yielded some useful applications in the fields of color theory (apparently, color perception is affected by culture, f.ex. Russians can discern more colors than Americans), audio compression (f.ex. ye olde MP3), and synthetic senses (people embedding magnets under their skin to sense magnetic fields).
And it's another question that isn't addressed by Accepting Materialism.
Why not ? I do not believe that subjectivity is "irreducible".
For us, given our ignorance, brain states and qualia are informationally and semantically independent, even if they are ontologically the same thing.
I'm not sure what this means. I mean, yes, given our ignorance, the Moon is a small, dim light source high up in the sky; but today we know better.
I mean sight is one modality hearing another.
How is this different from saying, "sight and sound are captured by different organs and processed by different sub-structures in the brain, thus leading to distinct experiences" ?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-25T00:19:36.562Z · LW(p) · GW(p)
Bear in mind that what is important here is the prediction of experience.
Believing in materialism does not reduce subjectivity, and neither does believing in the reducibility of subjectivity.
but today we know better.
Yep. Explanation first, then identification.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-25T00:50:50.404Z · LW(p) · GW(p)
Believing in materialism does not reduce subjectivity, and neither does believing in the reducibility of subjectivity.
I have no idea what this means. Believing or disbelieving in things generally doesn't poof them in or out of existence, but seeing as neither of us here are omniscient, I'm not sure why you'd bring it up.
Do you believe that subjective experiences are "irreducible" ? If so, you are making a very strong existential claim, and you need to provide more evidence than you've done so far.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-25T10:50:59.105Z · LW(p) · GW(p)
People keep telling me that Accepting Materialism is The Answer. If you don't believe that, then don't. But people keep telling me.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-25T16:11:38.234Z · LW(p) · GW(p)
That kind of depends on what the question is, and you still haven't told me. If the question is, "who makes the most delicious cupcakes", then Materialism is probably not the answer. If the question is, "how do you account for the irreducibility of subjective experience", then Materialism is not the answer either, since you have not convinced me that subjective experience is irreducible, and thus the answer is "mu".
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-25T17:26:32.776Z · LW(p) · GW(p)
I haven't told you because they haven't told me. Which is not surprising, since thinking about what the questions are tends to reveal that materialism doesn't answer most of them.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-09-25T18:23:21.592Z · LW(p) · GW(p)
Ok, so there are some questions that materialism doesn't answer, but you don't know what those questions are, or why it doesn't answer them ? Why are we still talking about this, then ?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-26T08:52:07.853Z · LW(p) · GW(p)
I know what the questions materialism doesn't answer are. I've mentioned them repeatedly. I don't know what the questions materialism does answer are, because the True Believers won't say.
↑ comment by bogus · 2012-09-18T16:55:45.096Z · LW(p) · GW(p)
Anyway, if "qualia" are brain states, then the question "how do qualia depend on brain states" is trivially answered.
It still makes sense to ask what these "brain states" actually are, physically. Since we seem to have direct experiential access to them as part of our subjective phenomenology, this suggests on Occamian grounds that they should not be as physically or ontologically complex as neurophysical brain states. The alternative would be for biological brains to be mysteriously endowed with ontologically basic properties (as if they had tiny XML tags attached to them) which makes no sense at all.
Replies from: TheOtherDave, Bugmaster↑ comment by TheOtherDave · 2012-09-18T17:18:58.618Z · LW(p) · GW(p)
It still makes sense to ask what these "brain states" actually are, physically
I would agree that it makes sense to ask what sorts of brain states are associated with what sorts of subjective experiences, and how changes in brain states cause and are caused by those experiences, and what sorts of physical structures are capable of entering into those states and what the mechanism is whereby they do so. Indeed, a lot of genuinely exciting work is being done in these areas by neurologists, neurobiologists, and similar specialists as we speak.
Replies from: bogus↑ comment by bogus · 2012-09-18T17:31:48.450Z · LW(p) · GW(p)
Indeed, a lot of genuinely exciting work is being done in these areas by neurologists, neurobiologists, and similar specialists as we speak.
I agree, and I would add that a lot of interesting work has also been done by transcendental phenomenologists - the folks who study the subjective experience phenomenon from its, well, "subjective" side. The open question is whether these two strands of work will be able to meet in the middle and come up with a mutually consistent account.
Replies from: shminux, TheOtherDave↑ comment by Shmi (shminux) · 2012-09-18T17:48:33.053Z · LW(p) · GW(p)
"transcendental phenomenology" is not a natural science but philosophy, so there is no middle to meet in.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:03:22.393Z · LW(p) · GW(p)
Except that there is, since there are plenty of subjects which have been studied from both sides. The natures of space, time and causality for a start.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-18T21:12:22.097Z · LW(p) · GW(p)
The natures of space, time and causality for a start.
Having studied these subjects from the physics side, I find that there is little useful input into the matter from the philosophy types, except for some vague motivations.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T15:18:08.226Z · LW(p) · GW(p)
You may not like the Middle, but it is there.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-20T16:27:11.039Z · LW(p) · GW(p)
Feel free to give an example.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T17:05:40.390Z · LW(p) · GW(p)
The natures of space, time and causality for a start.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-20T17:15:06.679Z · LW(p) · GW(p)
Something concrete, please. What is this nature? What is the philosophical position and what is the physical position? Where is that middle?
The standard example is Einstein's invocation of Mach's principle, which is actually a bad example. GR shows that, contrary to Mach, acceleration is absolute, not relative. One can potentially argue that the frame-dragging effect is sort of in this vein, but this effect is weak and was discovered after GR was already constructed, and not by Einstein.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T17:21:32.998Z · LW(p) · GW(p)
It's not a question of positions. The point is both philosophy and science study these questions.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-20T17:48:05.001Z · LW(p) · GW(p)
You claimed that there is a middle. Point one out, concretely.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T18:08:29.763Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/Leibniz%E2%80%93Clarke_correspondence. The point is both philosophy and science study these questions.
↑ comment by TheOtherDave · 2012-09-18T17:38:02.877Z · LW(p) · GW(p)
You say "has been done"... is that to suggest that there is no active work currently being done in transcendental phenomenology?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-09-18T22:37:24.364Z · LW(p) · GW(p)
If I can jump in... It's useful to distinguish between phenomenology in general, as the study of consciousness from "within" consciousness; various schools of phenomenological thought, distinguished by their methods and conclusions; and then all those attempts to explain the relationship between consciousness and the material world. These days the word "phenomenology" is used quite frequently in the latter context, and often just to designate what it is that one is trying to "correlate" with the neurons.
It's part of the general pattern of usage whereby an "-ology" comes to designate its subject matter, so that "biology" means life and not the study of life - "we share the same biology" doesn't mean our biology classes are in agreement - "psychology" means mind and not the study of mind, and "sociology" means social processes and not the study of them. That's an odd little trend and I don't know what to make of it, but in any case, "phenomenology" is often used as a synonym for the phenomena of consciousness, rather than to refer to the study of those phenomena or to a genuine theory of subjectivity.
Thus people talk about "naturalizing phenomenology", but they don't mean taking a specific theory of subjective consciousness and embedding it within natural science, they just mean embedding consciousness within natural science. Consciousness is treated in a very imprecise way, compared to e.g. neuroscience. Such precision as exists is usually in the domain of philosophical definition of concepts. But you don't see people talking about methods for precise introspection or for precise description of a state of consciousness, or methods for precise arbitration of epistemological disputes about consciousness.
Phenomenology as a discipline includes such methodological issues. But this is a discipline which exists more as an unknown ideal and as an object of historical study. Today we have some analytic precision in the definition of phenomenological concepts, and total imprecision in all other aspects, and even a lack of awareness that precision might be possible or desirable in those other aspects.
Historically, phenomenology is identified with a particular movement within philosophy, one which attached especial significance to consciousness as a starting point of knowledge and as an object of study. It could be argued that this is another sign of intellectual underdevelopment, in the discipline of philosophy as a whole - that phenomenology is regarded as a school of thought, rather than as a specific branch of philosophy like epistemology or ethics. It's as if people spoke about "the biological school of scientific thought", to refer to an obscure movement of scientists who stood out because they thought "life" should be studied scientifically.
So to recap, there is a movement to "naturalize phenomenology" but really it means the movement to "naturalize consciousness", i.e. place consciousness within natural science. And anyone trying to do that implicitly has a personal theory of consciousness - they must have some concept of what it is. But not many of those people are self-consciously adherents to any of the theories of consciousness which historically are known as phenomenological. And of those who are, I think there would be considerably more enthusiasm for "existential phenomenology" than for "transcendental phenomenology".
This distinction goes back to the divide between Husserl and his student Heidegger. Husserl was a rationalist in an older, subjective sense and by temperament - he was interested in analytical thought and in the analytical study of analytical thought; the phenomenology of propositional thinking, for example. Heidegger was his best student, but he became obsessed with the phenomenology of "Being", which became a gateway for the study of angst, dread, the meaning of life, and a lot of other things that were a lot more popular and exciting than the intentional structure of the perception of an apple. The later Heidegger even thought that the best phenomenology is found in the poetic use of language, which makes some sense - such language evokes, it gets people to employ complex integrated systems of concepts which aren't so easy to specify in detail.
Meanwhile, Husserl's more rationalistic tendencies led towards transcendental phenomenology, which even among philosophers was widely regarded as misguided, the pursuit of a phantasmal "transcendental ego" that was (according to the criticism) an artefact produced by language or by religious metaphysics. Husserl literally fled Nazi Germany in order to continue his work (while Heidegger tried to accommodate himself to the sturm und drang of the regime) and died with only a few loyalists developing the last phase of his ideas. After the war, Heidegger was excoriated for his politics, but existential phenomenology remained culturally victorious.
If we come closer to the present and the age of cognitive science, there are now many people who are appreciative of Husserl's earlier analyses, but transcendental phenomenology is still mostly regarded as misguided and metaphysical. Existential phenomenology is also a somewhat exotic affiliation among scientists, but it does get some recognition among people who are into the importance of "embodiment" in cognitive science and consciousness studies. Husserl's phenomenology is so verbal and verbalizing, whereas existential phenomenology, in its attention to "raw existence", can lead (among other destinations) to a 1960s-style rediscovery of the senses, e.g. in Merleau-Ponty's phenomenology, and from there to the embodied cognition of Rodney Brooks et al.
So in the contemporary world, transcendental phenomenology is very obscure and mostly it's a subject of historical research. You could make the analogy between Husserl and Einstein, with transcendental phenomenology as Husserl's unified field theory. Einstein was regarded as a founder of modern physics but his later interests were regarded as misguided, and it's much the same with Husserl. But fifty years after Einstein's death, unified theories are a standard interest, it's just that they're quantum rather than classical. Similarly, it's likely that the spirit of transcendental phenomenology will be revived eventually.
↑ comment by Bugmaster · 2012-09-18T17:00:18.635Z · LW(p) · GW(p)
Since we seem to have direct experiential access to them as part of our subjective phenomenology, this suggests on Occamian grounds that they should not be as physically or ontologically complex as neurophysical brain states.
How so ? I don't follow your reasoning, and I'm not sure what you mean by "neurophysical brain states" -- are there any other kinds ? Ultimately, every human brain is made of neurons...
Replies from: Peterdjones, TheOtherDave↑ comment by Peterdjones · 2012-09-18T20:04:34.040Z · LW(p) · GW(p)
I didn't understand that either.
↑ comment by TheOtherDave · 2012-09-18T17:13:41.851Z · LW(p) · GW(p)
Ultimately, every human brain is made of neurons...
Not exclusively. There are glial cells, for example.
Replies from: Bugmaster
comment by selylindi · 2012-09-14T14:05:56.451Z · LW(p) · GW(p)
You didn't actually dissolve the problem of qualia -- you just rationalized it away. The goal we like to aim for here in "dissolving" problems is not just to show that the question was wrongheaded, but thoroughly explain why we were motivated to ask the question in the first place.
If qualia don't exist for anyone, what causes so many people to believe they exist and to describe them in such similar ways? Why does virtually everyone with a philosophical bent rediscover the "hard problem"?
Replies from: common_law↑ comment by common_law · 2012-09-19T20:05:04.383Z · LW(p) · GW(p)
The goal we like to aim for here in "dissolving" problems is not just to show that the question was wrongheaded, but thoroughly explain why we were motivated to ask the question in the first place. ¶ If qualia don't exist for anyone, what causes so many people to believe they exist and to describe them in such similar ways? Why does virtually everyone with a philosophical bent rediscover the "hard problem"?
I think this objection applies to Dennett or Churchland's account but not to mine. The reason the qualia problem is compelling, on my account, is that we have an innate intuition of direct experience. There is indeed some mystery about why we have such an intuition when, on the analysis I provide, the intuition seems to serve no useful purpose, but the answer to that question lies in evolution.
The only answer to "why we were motivated to ask the question?" is the answer to "why did evolution equip us with this nonfunctional intuition?" What other question might you have in mind?
A suggested answer to the evolutionary question is contained in another essay, "The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?".
But I don't follow that "merely showing a problem is wrongheaded" would be tantamount to "just [rationalizing] it away." You would be justified in declining to count a showing of wrongheadedness as a complete dissolution, but that doesn't make a demonstration of wrongheadedness unsound. The reasonable response to such a showing is to conclude that there are no qualia and then to look for the answers to why they seem compelling.
comment by Matt_Caulfield · 2012-09-14T05:33:12.972Z · LW(p) · GW(p)
A philosophers’ version is the “inverted spectrum”: how do I know you see “red” rather than “blue” when you see this red print?
That's a typo, right? It's blue print.
Replies from: Pentashagon↑ comment by Pentashagon · 2012-09-18T18:41:54.927Z · LW(p) · GW(p)
While humorous, this is actually a specious argument. We can agree to call light between 450–495 nm "blue" and light between 620–750 nm "red" and in fact most of us do. The real question is whether, despite our labels, we feel the experience of those wavelengths quite differently, in ways we can't adequately express via language.
comment by jsteinhardt · 2012-09-15T16:31:54.883Z · LW(p) · GW(p)
Removing a fundamental scientific mystery is a conceptual gain.
Removing it by claiming it doesn't exist seems suspicious to me. Especially given that it seems quite clear that I have qualia.
Replies from: Eliezer_Yudkowsky, None↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T17:17:15.995Z · LW(p) · GW(p)
(Sorry, couldn't resist.)
↑ comment by [deleted] · 2012-09-22T13:28:17.519Z · LW(p) · GW(p)
I agree that this isn't a method that should be used to "solve" scientific problems, but I don't think that is what this article attempts to do. Rather, the essay makes the case that the problem of qualia was never a scientific problem to begin with - it is an epistemological problem that requires an epistemological solution.
If somebody asks you, "what is the sound of one hand clapping", you don't reach for a tape recorder and start experimental trials. The correct response is to reply, "your question is absurd." Similarly, when presented with the problem of how the non-causal essence of experience could have physical effects, the solution isn't to find an answer, the solution is to dissolve the question. (At least, that's what the article argues and I agree.)
Epistemology here is acting as a filtering device to determine which questions are solvable scientifically. The qualia question has a nasty habit of slipping through the net.
comment by IlyaShpitser · 2012-09-14T07:40:05.140Z · LW(p) · GW(p)
A joke: there is in fact an empirical test for p-zombiehood: whether you agree with Dennett or not.
Replies from: aaronde↑ comment by aaronde · 2012-09-17T05:31:13.513Z · LW(p) · GW(p)
Well, I agree with Dennett, and I'm pretty sure I'm a p-zombie.
I mean, that's the whole point, right? That p-zombies aren't actually any different from real people?
Replies from: shminux, Mitchell_Porter↑ comment by Shmi (shminux) · 2012-09-17T05:53:25.649Z · LW(p) · GW(p)
Ding-ding!
↑ comment by Mitchell_Porter · 2012-09-17T06:18:19.753Z · LW(p) · GW(p)
A p-zombie doesn't feel pain; it just says it does, and it goes through the motions of being in pain. Does that sound like you? If we chop off your hand, will you not actually be feeling anything?
Replies from: aaronde↑ comment by aaronde · 2012-09-17T07:25:14.116Z · LW(p) · GW(p)
When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about. So, sure: I don't feel pain in that sense. That's not going to stop me from complaining about having my hand chopped off!
Replies from: Peterdjones, randallsquared↑ comment by Peterdjones · 2012-09-18T20:56:14.141Z · LW(p) · GW(p)
OK. But you're using "feel" in a sense I don't understand.
Replies from: aaronde↑ comment by aaronde · 2012-09-18T22:10:30.979Z · LW(p) · GW(p)
As far as I know, to feel is to detect, or perceive, and pain is positive punishment, in the jargon of operant conditioning. So to say "I feel pain" is to say that I detect a stimulus, and process the information in such a way that (all else equal) I will try to avoid similar circumstances in the future. Not being a psychologist, I don't know much more about pain. But (not being a psychologist) I don't need to know more about pain. And I reject the notion that we can, through introspection, know something more about what it "is like" to be in pain.
I believe it's unethical to inflict pain on people (or animals, unnecessarily), because to hold something in a state of pain is to frustrate its goals. I don't think that it is any qualia associated with pain that makes it bad. Indeed, this seems to lead to morally repugnant conclusions. If we could construct a sophisticated intelligence that can learn by operant conditioning, but somehow remove the qualia, does it become OK to subject it to endless punishment?
Replies from: Peterdjones, 9eB1↑ comment by Peterdjones · 2012-09-20T15:15:29.635Z · LW(p) · GW(p)
I don't think we have to argue whether it is the goal-frustration or the pain-quale that is the bad. They are both bad. I don't want to have my goals frustrated painlessly, and I don't want to experience pain even in ways that promote my goals, such as being cattle-prodded every time I slip into akrasia.
And I reject the notion that we can, through introspection, know something more about what it "is like" to be in pain.
It would have been helpful to say why you reject it. If you were in a Mary-style experiment, where you studied pain whilst being anaesthetised from birth, would you maintain that personally experiencing pain for the first time would teach you nothing?
Replies from: aaronde↑ comment by aaronde · 2012-09-20T21:11:39.708Z · LW(p) · GW(p)
I don't want to experience pain even in ways that promote my goals
Don't you mean that avoiding pain is one of your goals?
It would have been helpful to say why you reject it.
It just seems like the default position. Can you give me a reason to take the idea of qualia seriously in the first place?
would you maintain that personally experiencing pain for the first time would teach you nothing?
Yes.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T00:56:26.268Z · LW(p) · GW(p)
Don't you mean that avoiding pain is one of your goals?
Yes. Because pain hurts.
[Qualia denial] just seems like the default position. Can you give me a reason to take the idea of qualia seriously in the first place?
Yes. My pains hurt. My food tastes. Voices and music sound like something.
Do you go drink the wine or just read the label? Do you go on holiday or just read the brochure?
Replies from: aaronde↑ comment by aaronde · 2012-09-21T17:19:32.094Z · LW(p) · GW(p)
My pains hurt. My food tastes. Voices and music sound like something.
Um, those are all tautologies, so I'm not sure how to respond. If we define "qualia" as "what it feels like to have a feeling", then, well - that's just a feeling, right? And "qualia" is just a redundant and pretentious word, whose only intelligible purpose is to make a mystery of something that is relatively well understood (e.g: the "hard problem of consciousness"). No?
Erm, sorry for the snark, but seriously: has talk of qualia, as distinct from mere perceptions, ever achieved any useful or even interesting results? Consciousness will continue to be a mystery to people as long as they refuse to accept any answers - as long as they say: "Okay, you've explained everything worth knowing about how I, as an information processing system, perceive and respond to my environment. And you've explained everything worth knowing about how I perceive my own perceptions of my environment, and perceive those perceptions, and so on ad infinitum - but you still haven't explained why it feels like something to have those perceptions."
Do you go drink the wine or just read the label? Do you go on holiday or just read the brochure?
Ha! That's actually not far off. But it's because I'm a total nerd who tries to eat healthy and avoid unnecessary expenses - not because of how I feel about qualia. I think that happiness should be a consequence of good things happening, not that happiness is a good thing in itself. So I try to avoid doing things (like drugs) that would decouple my feelings from outcomes in the real world. In fact, if I just did whatever I felt like at any given time, I would end up even less outgoing - less adventurous.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T17:45:40.064Z · LW(p) · GW(p)
My pains hurt. My food tastes. Voices and music sound like something.
Um, those are all tautologies, so I'm not sure how to respond.
a) I thought you were denying "pains hurt" b) "food tastes" isn't. c) The others can be rephrased as "injuries hurt" and "atmospheric compression waves sound like something".
If we define "qualia" as "what it feels like to have a feeling", then, well - that's just a feeling, right? And "qualia" is just a redundant and pretentious word, whose only intelligible purpose is to make a mystery of something that is relatively well understood (e.g: the "hard problem of consciousness"). No?
d) All words are individually redundant e) If you think you can make the Hard Problem easy by tabooing "qualia", let's see you try.
but you still haven't explained why it feels like something to have those perceptions."
Well, you haven't. And there is something.
Do you go drink the wine or just read the label? Do you go on holiday or just read the brochure?
So I try to avoid doing things (like drugs) that would decouple my feelings from outcomes in the real world.
Do you send disadvantaged kids to Disneyland, or just send them the brochure? Even if you don't personally care about experiencing things for yourself, it is difficult to see how you could ignore its importance in your "good outcomes".
Replies from: aaronde↑ comment by aaronde · 2012-09-21T20:18:11.722Z · LW(p) · GW(p)
I thought you were denying "pains hurt"
Not at all. I'm denying that there is anything left over to know about pain (or hurting) after you understand what pain does. As my psych prof. pointed out, you often see weird circular definitions of pain in common usage, like "pain is an unpleasant sensation". Whereas psychologists use functional definitions, like "a stimulus is painful, iff animals try to avoid it". I believe that the latter definition of pain is valid (if simplistic), and that the former is not.
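The functional definition above ("a stimulus is painful iff animals try to avoid it") can be sketched as a toy operant-conditioning agent. This is purely a hypothetical illustration, not the program referred to elsewhere in the thread; the class name, action labels, and update rule are all my own assumptions:

```python
class AvoidanceLearner:
    """Toy operant-conditioning agent. Under the functional definition,
    a stimulus counts as 'painful' iff the agent learns to avoid the
    behavior it follows -- no subjective feel appears anywhere."""

    def __init__(self, actions, lr=0.5):
        self.q = {a: 0.0 for a in actions}  # learned value of each action
        self.lr = lr

    def choose(self):
        # Pick the action with the highest learned value.
        return max(self.q, key=self.q.get)

    def punish(self, action, intensity=1.0):
        # Positive punishment: the stimulus lowers the action's value.
        self.q[action] -= self.lr * intensity

agent = AvoidanceLearner(["touch_stove", "keep_away"])
for _ in range(3):
    agent.punish("touch_stove")  # a "painful" stimulus follows this action

print(agent.choose())  # the agent now avoids the punished action
```

On the functional reading, that is the whole story about pain: a stimulus that reshapes behavior toward avoidance. The disagreement in the thread is precisely over whether anything is left out of such an account.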
If you think you can make the Hard Problem easy by tabooing "qualia", lets see you try.
I did that here, on another branch of this conversation. Again, this is simplistic, probably missing a few details, maybe slightly wrong. But I find it implausible that there is a huge, important aspect of what it is to be in pain that this completely misses.
Do you send disadvantaged kids to Disneyland, or just send them the brochure?
Depends on the kid. I would have preferred a good book to Disneyland (I don't like crowds or roller coasters). Again, it's about preferences, not qualia. And what someone prefers is simply what they would choose, given the option. (And if we want to get into CEV, it's what they would choose, given the option, and unlimited time to think about it, etc...)
Even if you don't personally care about experiencing things for yourself...
Woah, did I say that? Just because I don't value feelings in themselves doesn't mean that I can't care about anything that involves feelings. There's no meta-ethical reason, for example, why I can't prefer to have a perpetual orgasm for the rest of my life. I just don't. On the other hand, I am a big fan of novelty. And if novel things are going to happen, then something has to do them. That thing may as well be me. And to do something is to experience it. There is no distinction. So I certainly want to experience novel things.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-23T13:39:15.466Z · LW(p) · GW(p)
As my psych prof. pointed out, you often see weird circular definitions of pain in common usage, like "pain is an unpleasant sensation". Whereas psychologists use functional definitions, like "a stimulus is painful, iff animals try to avoid it". I believe that the latter definition of pain is valid (if simplistic), and that the former is not.
I don't have to like either definition, and I don't. The second definition attempts to define pain from outside behaviour, and therefore misses the central point of a feeling--that it feels like something, subjectively, to the organism having it. Moreover, it is liable to over-extend the definition of pain. Single celled organisms can show avoidant behaviour, but it is doubtful that they have feelings.
Putting things on an objective basis is often and rightly seen as a Good Thing in science, but when what you are dealing with is subjective, a problem is brewing.
But I find it implausible that there is a huge, important aspect of what it is to be in pain that this completely misses.
I find it obvious that there is a huge, important aspect of what it is to be in pain that that completely misses. There is nothing there that deals at all, in any way, with any kind of subjective feeling or sensation whatsoever. You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded that up.
I inspect the code, and find nothing that relates in any way to how I introspect pain or any other feeling.
But I suspect we will continue to go round in circles on this issue until I can persuade you to make the paradigm shift into thinking about subjective feelings from the POV of your own subjectivity.
Again, it's about preferences, not qualia.
It's about both, because you can't prefer to personally have certain experiences if there is no such thing as subjective experience.
And to do something is to experience it.
Would you want to go on a holiday, or climb a mountain, and then have your memories of the experience wiped? You would still have done it.
Replies from: aaronde↑ comment by aaronde · 2012-09-23T14:51:23.840Z · LW(p) · GW(p)
You're right, we're starting to go around in circles. So we should wrap this up. I'll just address what seems to be the main point.
I find it obvious that there is a huge, important aspect of what it is to be in pain that [your definition] completely misses.
This is the crux of our disagreement, and is unlikely to change. But you still seem to misunderstand me slightly, so maybe we can still make progress.
You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded that up.
No, I have decided that pain is any stimulus - that is, a feeling - that causes a certain kind of behavior. This is not splitting hairs. It is relevant, because you keep telling me that my view doesn't account for feelings, when it is all about feelings! What you really mean is that my view doesn't account for qualia, which really just means I'm being consistent, because I don't believe in qualia.
you can't prefer to personally have certain experiences if there is no such thing as subjective experience.
Here for example, you seem to be equivocating between "experience" and "subjective experience". If "subjective experience" means the same thing as "experience", then I don't think there is no such thing as subjective experience. But if "subjective experience" means something different, like "qualia", then this statement doesn't follow at all.
P.S. This may be off-point, but I just have to say, this:
I inspect the code, and find nothing that relates in any way to how I introspect pain or any other feeling.
...is because the code has no capacity for introspection - not because it has no capacity for pain.
Edit: maybe this last point presents room for common ground, like: "Qualia is awareness of one's own feelings, and therefore is possessed by anything that can accurately report on how it is responding to stimuli."?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-23T15:03:46.259Z · LW(p) · GW(p)
No, I have decided that pain is any stimulus - that is, a feeling
I don't accept that all stimuli are feelings. A thermostat is stimulated by changes in temperature, but I don't think it feels the cold.
- that causes a certain kind of behavior. This is not splitting hairs. It is relevant, because you keep telling me that my view doesn't account for feelings, when it is all about feelings!
It is about "feelings" as you define the word, which is not general usage.
What you really mean is that my view doesn't account for qualia, which really just means I'm being consistent, because I don't believe in qualia.
Which is itself consistent with the fact that your "explanations" of feeling invariably skirt the central issues.
However, I am never going to be able to provide you with objective proof of subjective feelings. It is for you to get out of the loop of denying subjectivity because it is not objective enough.
If "subjective experience" means the same thing as "experience", then I don't think there is no such thing as subjective experience. But if "subjective experience" means something different, like "qualia", then this statement doesn't follow at all.
"subjective experience" means "experience" and both mean the same thing as "qualia". Which is to say, it is incoherent to me that you could deny qualia and accept experience.
...is because the code has no capacity for introspection - not because it has no capacity for pain.
I don't think introspection is sufficient for feeling, since I can introspect thought as well.
Replies from: aaronde↑ comment by aaronde · 2012-09-23T17:17:13.202Z · LW(p) · GW(p)
Okay, I've tabooed my words. Now it's your turn. What do you mean by "feeling"?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T19:28:25.965Z · LW(p) · GW(p)
The conscious subjective experience of a sensation or emotion.
Replies from: aaronde↑ comment by aaronde · 2012-09-24T23:01:55.046Z · LW(p) · GW(p)
How do I know whether I am having a conscious subjective experience of a sensation or emotion?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-24T23:18:37.565Z · LW(p) · GW(p)
You're conscious. Being conscious of things kind of goes with the territory.
Replies from: aaronde↑ comment by aaronde · 2012-09-25T01:18:54.254Z · LW(p) · GW(p)
I also think that I am conscious, but you keep telling me I have the wrong definitions of words like this, so I don't know if we agree. I would say being conscious means that some part of my brain is collating data about my mental states, such that I could report accurately on my mental states in a coherent manner.
↑ comment by 9eB1 · 2012-09-19T04:46:33.833Z · LW(p) · GW(p)
If someone offered me a pill that would merely reduce my qualia experience of pain, I would take it, even if it still triggered in me an information process that would cause me to try to avoid similar circumstances in the future, and even if it were impossible to tell observationally that I had taken it, except by asking about my qualia of experiencing pain and other such philosophical topics. That is, if I am going to writhe in agony, I would prefer to have my mind do it for me without me having to experience the agony. If I'm going to never touch a hot stove because of one time when I burned myself, I'd prefer to do that without having the memory of the burn. This idea is not malformed, given what we know about the human brain's lack of introspection on its actions.
I believe it's unethical to inflict pain on people (or animals, unnecessarily), because to hold something in a state of pain is to frustrate its goals. I don't think that it is any qualia associated with pain that makes it bad.
In practice it seems that the only reason that it frustrates a person's goals to receive pain is because they have a goal, "I don't want to be in pain." There are certainly reasons that the pain is adaptive, but it certainly seems from the inside like the most objectionable part is the qualia. If the sophisticated intelligence HAS qualia but doesn't have as a goal avoidance of pain, that suggests your ethical system would be OK to subject it to endless punishment (a sentiment with which I may agree).
Replies from: Richard_Kennaway, aaronde↑ comment by Richard_Kennaway · 2012-09-19T13:23:49.654Z · LW(p) · GW(p)
If someone offered me a pill that would merely reduce my qualia experience of pain I would take it
Morphine is said to have this effect. Some people who have been prescribed it for pain say that they still feel the pain but it doesn't hurt. But it's illegal in most places except for bona fide medical purposes.
↑ comment by aaronde · 2012-09-19T07:57:47.404Z · LW(p) · GW(p)
I think that split-brain study shows the opposite of what you think it shows. If you observed yourself to be writhing around in agony, then you would conclude that you were experiencing the qualia of pain. Try to imagine what this would actually be like, and think carefully about what "trying to avoid similar circumstances in the future" actually means. You can't sit still, can't think about anything else. You plead with anyone around to help you - put a stop to whatever is causing this - insisting that they should sympathize with you. The more intense the pain gets, the more desperate you become. If not, then you aren't actually in pain (as I define it) because you aren't trying very hard to avoid the stimulus. I'd sympathize with you. Are you saying you wouldn't sympathize with yourself?
BTW, how do you think I'd respond, if subjected to pain and asked about my "qualia"? By this reasoning, is my pain irrelevant?
In practice it seems that the only reason that it frustrates a person's goals to receive pain is because they have a goal, "I don't want to be in pain."
I think you have the causation backwards. Pain causes a person to acquire the goal of avoiding whatever the source of the pain is, even if they didn't have that goal before. (Think about someone confidently volunteering to be water-boarded to prove a point, only to immediately change his mind when the torture starts.) That's how I just defined pain above. That's all pain is, as far as I know. Of course, in animals, the pain response happens to be associated with a bunch of biological quirks, but we could recognize pain without those minutiae.
If the sophisticated intelligence HAS qualia but doesn't have as a goal avoidance of pain, that suggests your ethical system would be OK to subject it to endless punishment (a sentiment with which I may agree).
Well, you just described an intelligence that doesn't feel pain. So it doesn't make sense to ask whether it would be OK to inflict pain on it. Could you clarify what it would mean to punish something that has no desire to avoid the punishment?
↑ comment by randallsquared · 2012-09-18T18:51:53.848Z · LW(p) · GW(p)
When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about.
Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I'm not sure what you mean.
Replies from: aaronde↑ comment by aaronde · 2012-09-18T22:11:07.214Z · LW(p) · GW(p)
If an actor stays in character his entire life, making friends and holding down a job, in character - and if, whenever he seemed to zone out, you could interrupt him at any time to ask what he was thinking about, and he could give a detailed description of the day dream he was having, in character...
Well then I'd say the character is a lot less fictional than the actor. But even if there is an actor - an entirely different person putting on a show - the character is still a real person. This is no different from saying that a person is still a person, even if they're a brain emulation running on a computer. In this case, the actor is the substrate on which the character is running.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-09-19T03:07:04.950Z · LW(p) · GW(p)
So would you say video game characters "feel" pain?
Replies from: aaronde↑ comment by aaronde · 2012-09-19T07:59:39.168Z · LW(p) · GW(p)
Probably some of them do (I don't play video games). But they aren't even close to being people, so I don't really care.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-09-19T09:42:46.472Z · LW(p) · GW(p)
Would you say a thermostat feels pain when it can't adjust the temperature towards its preferred setting? Otherwise you might have some strange ideas about the complexity of video game characters. There's a very long way to go in internal complexity from a video game character to, say, a bacterium.
Replies from: aaronde↑ comment by aaronde · 2012-09-19T19:59:13.235Z · LW(p) · GW(p)
I don't think a program has to be very sophisticated to feel pain. But it does have to exhibit some kind of learning. For example:
    import random

    def wanderer(locations, utility, X):
        current_time = 0
        while True:
            # Pick two locations at random and move to whichever
            # currently has higher utility.
            l1, l2 = random.sample(locations, 2)
            if utility[l1] < utility[l2]:
                my_location = l2
            else:
                my_location = l1

            # If the stimulus X fires here, lower this location's
            # utility, so it tends to be avoided in the future.
            if X(my_location, current_time):
                utility[my_location] = utility[my_location] - 1

            current_time = current_time + 1
This program aimlessly wanders over a space of locations, but eventually tends to avoid locations where X has returned True at past times. It seems obvious to me that X is pain, and that this program experiences pain. You might say that the program experiences less pain than we do, because the pain response is so simple. Or you might argue that it experiences pain more intensely, because all it does is implement the pain response. Either position seems valid, but again it's all academic to me, because I don't believe pain or pleasure are good or bad things in themselves.
To answer your question, a thermostat that is blocked from changing the temperature is frustrated, not necessarily in pain. Although, changing the setting on a working thermostat may be pain, because it is a stimulus that causes a change in the persistent behavior of a system, directing it to extricate itself from its current situation.
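A bounded variant of the program above can be run to check that it really does learn avoidance (the names here, like `wanderer_steps`, are my own, and the setup assumes the obvious reading of the pseudocode): a location where X always fires ends up penalized, while the others are left untouched.

```python
import random

def wanderer_steps(locations, utility, X, steps):
    """Bounded wanderer: same logic as above, but for a fixed number of steps."""
    for t in range(steps):
        l1, l2 = random.sample(locations, 2)
        loc = l2 if utility[l1] < utility[l2] else l1
        if X(loc, t):
            utility[loc] -= 1  # the "pain" response: mark this place as worse
    return utility

random.seed(0)
utility = {"a": 0, "b": 0, "c": 0}
# "a" is always painful; after enough steps its utility has dropped
# and the wanderer prefers "b" and "c".
result = wanderer_steps(["a", "b", "c"], utility, lambda loc, t: loc == "a", 1000)
assert result["a"] <= -1
assert result["b"] == 0 and result["c"] == 0
```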
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-16T19:14:59.625Z · LW(p) · GW(p)
Moved to Discussion.
comment by RomeoStevens · 2012-09-14T02:56:56.977Z · LW(p) · GW(p)
Could we possibly have a normal formatted version? I feel like I'm being sold a diet pill.
Replies from: Alicorn↑ comment by Alicorn · 2012-09-14T03:53:33.715Z · LW(p) · GW(p)
I've replaced all the red (except the first instance, where it serves a purpose) with regular bold.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-09-14T04:09:54.856Z · LW(p) · GW(p)
"You think you replaced it with bold, but that's just a belief. Boldness qualia don't actually exist."
comment by [deleted] · 2012-09-14T08:50:38.111Z · LW(p) · GW(p)
First, let us make it clear that when you see red, your brain does not store pixels with high R-value in little RGB colour points.
I conjecture that the brain has some highly efficient storage formats for visuals, which is evident in the fact that people untrained in visual arts all have very symbol-centric expressive forms. You do not store a high fidelity vector-graphics image of a red sports car when you see one; you probably store the symbol car, the colour red, the feelings associated with the car-brand, some sense of 'sleekness' and many other paintbrush handles.
Absence of qualia in the paintbrush handles is evident in language; most people agree that the sky on a clear day is some shade of blue. "Blue" is universal, but blue certainly isn't. Whatever feelings and images the colour blue conjures up in your internal narrative is your personal experience of blue.
It all ties up to the broader concept of the way the brain works when we are empathic. Parts of the brain associated with the experience of colour (visual cortex? I am not good with neurology) can be lit up both by stimuli from the retinal nerve, AND by stimuli from imagination or conversation.
I have a lot of intuitions about this that are hard to verbalize.
comment by selylindi · 2012-09-14T15:38:42.491Z · LW(p) · GW(p)
How do bumping beer cans jointly experience the subjective taste of a strawberry? How can a soul push cations across lipid bilayer membranes? Neither materialist nor non-materialist answers seem to be adequate, which does suggest that there's a problem here that needs dissolving more than it needs solving. In the absence of adequate evidence, my preferred hypothesis is a kind of neutral monism.
I look in front of me and see a purple box. By any of a variety of possible causes, my attention is brought to bear on my current action, and I notice that I am looking in front of me and seeing a purple box. My working memory happens to be large enough to admit both active neural pathways at once: seeing the purple box and noticing that I see it. The overlap between the active neural pathways is large but not total, and in some key places there are "coincidence detector" neurons that take input from both pathways and fire when both are active in a short interval. The information sent by the coincidence detectors contains what? -- it contains information that says I'm aware of and having an experience of a purple box. And given that input to my decision processes, I can act, not only on the sight of the purple box, but on a very curious bit of information we can call the subjective awareness of the purple box.
On this account, qualia is nearly epiphenomenal, in that we can act on the fact of its existence but not on its character (i.e. it remains ineffable).
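The "coincidence detector" step can be sketched in toy form. Everything here (the spike-time lists, the window, the function name) is my own illustration, not a claim about real neural coding: the detector fires only at times when both pathways are active within a short interval.

```python
def coincidence_detector(pathway_a, pathway_b, window):
    """Return the times in pathway_a that fall within `window` of some
    spike time in pathway_b -- i.e., when both pathways are jointly active."""
    return [t for t in pathway_a
            if any(abs(t - u) <= window for u in pathway_b)]

# Seeing the purple box (pathway A) vs. noticing-the-seeing (pathway B):
seeing = [1.0, 5.0, 9.0, 13.0]
noticing = [5.2, 13.1]
assert coincidence_detector(seeing, noticing, window=0.5) == [5.0, 13.0]
```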
Replies from: fubarobfusco, Oscar_Cunningham↑ comment by fubarobfusco · 2012-09-15T17:50:48.926Z · LW(p) · GW(p)
By any of a variety of possible causes, my attention is brought to bear on my current action, and I notice that I am looking in front of me and seeing a purple box.
What sort of a thing am "I" that the expression "my attention" refers to anything? What am I, that I can possess an attention? Do I have it in the way I have hands, or in the way I have the recollection that 17 × 2 = 34? Can I sometimes have two attentions, or zero, or half of one?
Replies from: selylindi, Peterdjones↑ comment by Peterdjones · 2012-09-18T20:47:52.336Z · LW(p) · GW(p)
I would love to be able to comment on the degree of meaningfulness, truth, well-informedness, originality and clarity of your comment, but I find myself suddenly confused by what sort of things meaningfulness, truth, well-informedness, originality and clarity actually are. Do you have one or zero of them...?
↑ comment by Oscar_Cunningham · 2012-09-14T23:26:18.622Z · LW(p) · GW(p)
On this account, qualia is nearly epiphenomenal, in that we can act on the fact of its existence but not on its character (i.e. it remains ineffable).
The rest of your comment was great, but I lost you at the last sentence, could you re-express it?
Replies from: selylindi↑ comment by selylindi · 2012-09-20T22:19:06.590Z · LW(p) · GW(p)
A theory about qualia is that they're epiphenomena, which I interpret to mean that causation goes only one way (from physical events to qualia), not both ways. I used to immediately reject that theory because we're physically discussing qualia. But then I speculatively proposed the neural argument above, and realized I was wrong. We only ever discuss the fact that we have qualia. We don't discuss the content of the qualia themselves. In fact it seems we can't discuss the raw experienced content of the qualia. So maybe they are very nearly epiphenomenal, with one niggling exception: the facts of their existence are apparently causally linked in both directions (perhaps as explained by that putative neural mechanism).
Um, that might still be badly expressed, but it's my best effort. If it still doesn't work, then the whole idea is probably badly formed.
Perhaps a differently evolved or designed neural architecture could discuss the content of qualia. We might simply lack the wiring for it.
comment by Shmi (shminux) · 2012-09-14T05:46:47.382Z · LW(p) · GW(p)
Wasn't the issue rather adequately addressed in the Sequences? Why a sequel?
comment by Spinning_Sandwich · 2012-09-14T10:15:09.966Z · LW(p) · GW(p)
There are two traditional problems associated with colors. One is the sort that pseudo-philosophical douchebags take to: "Dude, what if no one really sees the same colors?" The other was very popular in the heyday of classical analytic philosophy: how can we say that Red is Not-Blue analytically if they are empirical & presumably a posteriori data?
Let's assume for the sake of getting to the real argument that consciousness arises from physical matter in a manner uncontroversial for the materialist. Granting this, why do we all see the same colors, if we do?
The short answer is that we probably don't. I don't even see with the same level of clarity that someone with 20/20 vision does, at least not without the help of my glasses, which themselves introduce a level of optical distortion not significant to my brain's processing but certainly significant in a [small] geometric sense.
A quicker way to get at the fact that we probably don't see quite the same way is to point out that dogs' eyes aren't responsive to certain colors which most human eyes can distinguish quite easily. This leads directly to the point that there is probably enough biological variation (& physical deterioration over someone's lifetime) that we don't end up with quite the same picture of the world, even though it's evidently close enough that we all get along all right.
This also leads to the strongest argument (for empirical scientists anyhow) that we do all see roughly the same thing: we've got pretty much the same sensory organs & brains to process what is roughly the same data. It seems reasonable to expect that most members of a given species should experience roughly the same picture of the world.
So much for the first problem, at least in brief & from a pragmatic point of view. The skeptical philosopher must admit that this is a silly problem to demand a decisive answer to.
As for the problem of distinguishing between colors analytically, of determining a priori the truth of empirical statements, a mathematical concept is quite helpful, particularly if we're willing to grant that colors are induced by a spectrum of wavelengths which the eye can perceive. But even if we don't grant that last fact, introducing the notion of a partition suffices to distinguish the perceived colors (or qualia) inasmuch as it also divides up the spectrum of wavelengths which induce those colors.
Note that this doesn't help us escape the fact that we require experience to learn of the various colors & the fact that they form a partition, but that isn't the crux of the problem to begin with. In the same way that we can learn what a round table is & deduce that it is a table analytically, once we become acquainted with the colors & their structure---that is, once we understand the abstract rules governing partitions---we can make analytic claims based only on that structure we understand, and not requiring any further empirical data, or really even the empirical components of the original data.
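The partition idea can be made concrete with a toy sketch (the band boundaries below are illustrative round numbers, not colorimetric fact): because the bands are disjoint, "red is not blue" follows from the structure of the partition alone, with no further empirical data.

```python
# Illustrative partition of the visible spectrum (wavelengths in nm).
BOUNDARIES = [380, 450, 495, 570, 590, 620, 750]
NAMES = ["violet", "blue", "green", "yellow", "orange", "red"]

def color_of(wavelength_nm):
    """Return the band containing a wavelength, or None if out of range."""
    for lo, hi, name in zip(BOUNDARIES, BOUNDARIES[1:], NAMES):
        if lo <= wavelength_nm < hi:
            return name
    return None

# Disjoint bands: no wavelength is ever assigned two labels, so given
# the partition, "red" excludes "blue" analytically.
assert color_of(650) == "red"
assert color_of(470) == "blue"
assert color_of(650) != "blue"
```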
Replies from: CCC, Peterdjones, common_law↑ comment by CCC · 2012-09-14T10:43:17.068Z · LW(p) · GW(p)
Granting this, why do we all see the same colors, if we do?
I can quickly and easily prove that some people see colours in a different way to the way that I do.
To my eyes, red and green are visibly and obviously distinct. I cannot look at one and consider it to be the other. Yet, red-green colour blindness is the most common version of colourblindness; these people must see either red, or green, or both in some way differently to the way that I see these colours.
Replies from: ArisKatsaris, Spinning_Sandwich↑ comment by ArisKatsaris · 2012-09-14T14:49:57.193Z · LW(p) · GW(p)
I think you are confusing the word "color" that identifies a certain type of visual experience, with the word "color" that identifies a certain set of light-frequencies. This is much like confusing the word "sound" which means "auditory experience", with the word "sound" which means "acoustic vibrations".
You see certain frequencies in a different way than people with red-green colour blindness; in short these frequencies lead to different qualia, different visual experiences. That's rather obvious and rather useless in discussing the deeper philosophical point.
But to say that you experience certain visual experiences differently than others experience them, may even be a contradiction in terms -- unless it's meant that the atomic qualia trigger in turn different qualia (e.g. different memories or feelings) in each person. Which is probably also trivially true...
Replies from: CCC, Spinning_Sandwich↑ comment by CCC · 2012-09-15T11:01:59.647Z · LW(p) · GW(p)
Apologies for the confusion.
Your second paragraph encapsulates the point I intended to convey; that given frequencies of light create in my mind qualia that differ from the qualia created by the same frequency of light in the mind of a red-green colourblind person.
↑ comment by Spinning_Sandwich · 2012-09-14T22:53:44.141Z · LW(p) · GW(p)
On the common sense view that qualia are the kolors generated by our minds, which do so based on sensory input about the colors in the world, it makes sense that color-to-kolor conversion (if you will) should be imperfect even among people with properly functioning sight.
It's possible my writing wasn't clear enough to convey this point (or that you were objecting to CCC, not me), but I was getting at the idea that we probably do experience slightly different kolors. It was never my intention to be philosophically "rigorous" about that, just to raise the point.
↑ comment by Spinning_Sandwich · 2012-09-14T11:01:26.314Z · LW(p) · GW(p)
You'll notice that the next few sentences of my post address this same idea for fully functional members of different species. But it doesn't technically refute the claim for qualia, only that we're not all equally responsive to the same stimuli.
It is, for example, technically possible (in the broadest sense) that color-blind people experience the same qualia we do, but they are unable to act on them, much in the same way that a friend with ADD might experience the same auditory stimuli I do, but then is too distracted to actually notice or make sense of it.
I note, however, that the physical differences in color-blindness (or different species' eyes) are enough reason to lend little credibility to this idea.
↑ comment by Peterdjones · 2012-09-14T10:33:09.409Z · LW(p) · GW(p)
I'm not sure what the problem of distinguishing colours analytically is supposed to relate to. The classic modern argument, Mary's Room, attempts to demonstrate that the subjective sensation of colour is a problem for materialism, because one can conceivably know everything about the neuroscience of colour perception without knowing anything about how colours look. That could sort-of be re-expressed by saying Mary can't analytically deduce colour sensations from the information she has. And it is sort-of true that once you have a certain amount of experiential knowledge of colour space, you could guess the nature of colours you haven't personally seen. But that isn't very relevant to M's R, because she is stipulated as not having seen any colours. So, overall, I don't see what you are getting at.
Replies from: Lightwave, None, Spinning_Sandwich↑ comment by Lightwave · 2012-09-17T08:30:52.311Z · LW(p) · GW(p)
You can also know all relevant facts about physics but still not "know" how to ride a bicycle. "Knowing" what red looks like (or being able to imagine redness) requires your brain to have the ability to produce a certain neural pattern, i.e. execute a certain neural "program". You can't learn how to imagine red the same way you learn facts like 2+2=4 for the same reason you can't learn how to ride a bike by learning physics. It's a different type of "knowledge", not sure if we should even call it that.
Edit (further explanation): To learn how to ride a bike you need to practice doing it, which implements a "neural program" that allows you to do it (via e.g. "muscle memory" and whatnot). Same for producing a redness sensation (imagining red), a.k.a "knowing what red looks like".
Replies from: Peterdjones, Eugine_Nier↑ comment by Peterdjones · 2012-09-18T20:17:26.543Z · LW(p) · GW(p)
"Knowing" what red looks like (or being able to imagine redness) requires your brain to have the ability to produce a certain neural pattern, i.e. execute a certain neural "program"
Maybe. But, if true, that doesn't mean that red is know-how. It means that something like know-how is necessary to get knowledge-by-acquaintance with Red. So it still doesn't show that Red is know-how in itself. (What does it enable you to do?)
Replies from: Lightwave↑ comment by Lightwave · 2012-09-19T09:56:28.383Z · LW(p) · GW(p)
So it still doesn't show that Red is know-how in itself.
Talking about "red in itself" is a bit like talking about "the-number-1 in itself". What does it mean? We can talk about the "redness sensation" that a person experiences, or "the experience of red". From an anatomical point of view, experiencing red(ness) is a process that occurs in the brain. When you're looking at something red (or imagining redness), certain neural pathways are constantly firing. No brain activity -> no redness experience.
Let's compare this to factual knowledge. How are facts stored in the brain? From what we understand about the brain, they're likely encoded in neuronal/synaptic connections. You could in principle extract them by analyzing the brain. And where is the (knowledge of) red(ness) stored in the brain? Well there is no 'redness' stored in the brain, what is stored are (again in synaptic connections) instructions that activate the color-pathways of the visual cortex that produce the experience of red. See how the 'knowledge of color' is not quite like factual knowledge, but rather looks like an ability?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T13:45:10.922Z · LW(p) · GW(p)
An ability to do what?
You argue as if involving neuronal activation is sufficient evidence that something is an ability. But inabilities are as neuronal as abilities. If someone becomes incapably drunk, that is as much a matter of neuronal activity as anything else. But in common sense terms, it is loss of an ability, not acquisition of an ability.
In any case, there are plenty of other objections to the Ability Hypothesis.
↑ comment by Eugine_Nier · 2012-09-18T00:32:10.028Z · LW(p) · GW(p)
Both riding a bike or seeing red involves the brain performing I/O, i.e., interacting with the outside world, whereas learning that 2+2=4 can be done without such interaction.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-09-18T01:32:26.283Z · LW(p) · GW(p)
whereas learning that 2+2=4 can be done without such interaction.
One might imagine so, but I expect there are no examples of it ever happening.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:18:09.462Z · LW(p) · GW(p)
There are plenty of examples of less basic a priori truths being figured out once the basics are in place.
↑ comment by [deleted] · 2012-09-22T16:12:19.786Z · LW(p) · GW(p)
Mary's room is an interesting one. I think there's a valid rebuttal to it, though, but it takes quite a bit of explanation so hold onto your hats, ladies and gentlemen, and if you're not interested then feel free to ignore. I should stress that this is an argument of my own formulation, although it is informed by my readings of a bunch of other philosophers, and that therefore it is entirely possible that people who share my conclusions might disagree with my premises or form of argument. I'm not trying very hard to convince anyone with this post, just putting the argument out there for your inspection. <-- (EDIT: left the word "not" out of this sentence the first time. Whoops!)
The hard-materialist, anti-qualian, functionalist argument is that sensation ≡ brain state. That is, "for one's brain to be in the brain-state which is produced when red light hits one's retina is to experience redness". Once you've experienced redness a few times, it is possible to intentionally assume that "red" brain-state, so it is possible to remember what it is like to see red without actually having to be exposed to red light. We call this "knowing what red is like".
Mary, unfortunately, has grown up in a colour-free environment, so she has never experienced the brain-state that is "seeing red", and even if her brain had drifted through that state accidentally, she wouldn't have known that what she was experiencing was redness. She can't find her way to the state of redness because she has never been there before. When she starts researching in an attempt to figure out what it is like to see red, her descriptive knowledge of the state will increase - she'll know which sets of neurons are involved, the order and frequency of their firings, etc - but of course this won't be much help in actually attaining a red brain-state. Hearing that Paris is at 48.8742° N, 2.3470° E doesn't help you get there unless you know where you are right now.
Mary's next step might be to investigate the patterns that instantiate sensations with which she is familiar. She might learn about how the smell of cinnamon is instantiated in the brain, or the feeling of heat, etc, etc, and then attempt to "locate" the sensation of red by analogy to these sensations. If you know where you are relative to Brisbane, and you know where Brisbane is relative to Paris, then you can figure out where you are relative to Paris.
This workaround would be effective if she were trying to find her way to a physical place, because on Earth you only need 3 dimensions to specify any given location, and it's the same 3 dimensions every time. Unfortunately, the brain is more complicated. There are some patterns of neural behaviour which are only active in the perception of colour, so while analogy to the other senses might allow Mary to zero in a little closer to knowing what red is like, it wouldn't be nearly enough to solve her problem.
Luckily, Mary is a scientist, and where scientists can't walk they generally invent a way to fly. Mary knows which neurons are activated when people see red, and she knows the manner of their activation. She can scan her head and point to the region of her brain that red light would stimulate. So why does she need red light? Synesthetes regularly report colour experiences being induced by apparently non-coloured stimuli, and epileptics often experience phantom colours before fits. Ramachandran and Hubbard even offer a report of a colour-blind synesthete who experiences what he calls "Martian colours" - colours which he has never experienced in the real world and which therefore appear alien to him (PRSL, 2001). So, Mary CAN know red, she just has to induce the brain state associated with redness in herself. Maybe she uses transcranial electrostimulation. Maybe she has to resort to wireheading (http://wiki.lesswrong.com/wiki/Wireheading). Maybe all she needs to do is watch a real-time brain scan while she meditates, so she can learn to guide herself into the right state the same way that people who already "know" red get to it. Point is, if Mary is at all dedicated, she's going to end up understanding red.
Of course, some qualians might argue that this misses the point - if Mary induces an experience of redness then she's still exposing herself to the quale of red, whether or not there was any red light involved, so Mary hasn't come to her knowledge solely by physical means. I think that skirts dangerously close to begging the question, though. As I've mentioned above, the functionalist view of colour holds that to know what it is like to see red is just "to know how to bring about the brain-state associated with redness in oneself". It seems unfair to say that Mary has to be able to possess that knowledge but never use it in order for functionalists to be proved right - you might as well request that she know what an elephant looks like without ever picturing one in her mind. Regardless, the Mary's Room thought experiment presupposes that Mary can't experience the quale of red in her colourless environment. If qualians want to argue that inducing the brain state of red exposes Mary to the quale of red, then the thought experiment doesn't do what it was supposed to, and therefore can't prove what it was designed to prove.
Anyway, I'd say that was my two cents but looking at how much I've typed it's probably more like fifteen dollars...
↑ comment by Spinning_Sandwich · 2012-09-14T11:11:36.297Z · LW(p) · GW(p)
It's just another cool problem about colors.
As far as Mary's Room goes, you might similarly argue that you could have all of the data belonging to Pixar's next movie, which you haven't seen yet, without having any knowledge of what it looks like or what it's about. Or that you can't understand a program without compiling it & running it.
I'm not entirely sure how much credibility I lend to that. There are some very abstract things (fairly simple, yes) which I can intuit without prior experience, and there are many complicated things which I can predict due to a great deal of prior experience (eg landscapes described in novels).
But I mostly raised it as another interesting problem with a proposed [partial] solution.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T11:21:49.504Z · LW(p) · GW(p)
As far as Mary's Room goes, you might similarly argue that you could have all of the data belonging to Pixar's next movie, which you haven't seen yet, without having any knowledge of what it looks like or what it's about
I don't see how you could fail to be able to deduce what it is about, given Mary's superscientific powers.
Or that you can't understand a program without compiling it & running it.
Ordinary mortals can, in simple cases, and Mary presumably can in any case.
Or that you can't understand a program without compiling it & running it.
You're not a superscientist. Can I recommend reading the linked material?
Replies from: Spinning_Sandwich↑ comment by Spinning_Sandwich · 2012-09-14T11:27:34.495Z · LW(p) · GW(p)
It's possible I already had & that you're misunderstanding what my examples are about: the difference between the physical/digital/abstract structure underlying something & the actual experience it produces (eg qualia for perceptions of physical things, or pictures for geometric definitions, etc).
I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T11:44:17.747Z · LW(p) · GW(p)
How about telling me whether you actually had?
I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
I don't see where you are going with that. If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code. But M's R proposes that there is something you can get from seeing a colour yourself. The analogy doesn't seem to be there. Unless you disagree with the intended conclusion of M's R.
Replies from: hairyfigment, Spinning_Sandwich↑ comment by hairyfigment · 2012-09-15T02:01:29.971Z · LW(p) · GW(p)
If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
This seems trivially false. See also the incomputability of pure Solomonoff induction.
Likewise, I see no reason to expect that a mathematical process could look at a symbolic description of itself and recognize it with intuitive certainty. We have some reason to think the opposite. So why expect to recognize "qualia" from their descriptions?
As orthonormal points out at length, we know that humans have unconscious processing of the sort you might expect from this line of reasoning. We can explain how this would likely give rise to confusion about Mary's Room.
Replies from: wedrifid, Peterdjones↑ comment by wedrifid · 2012-09-15T03:33:15.310Z · LW(p) · GW(p)
If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
This seems trivially false.
The implicit assumption I inferred from the claim made it:
If you are a superscientist, there is nothing you can learn from running a programme [for some given non-infinite time] that you cannot get from examining the code [for a commensurate period of subjective time, including allowance for some computational overhead in those special cases where abstract analysis of the program provides no compression over just emulating it]."
That makes it trivially true. The trivially false seems to apply only when the 'run the program' alternative gets to do infinite computation but the 'be a superscientist and examine the program' alternative doesn't.
Replies from: Peterdjones, hairyfigment↑ comment by Peterdjones · 2012-09-18T20:26:54.768Z · LW(p) · GW(p)
My thoughts exactly.
↑ comment by hairyfigment · 2012-09-15T04:50:40.073Z · LW(p) · GW(p)
The trivially false seems to apply only when the 'run the program' alternative gets to do infinite computation
'If the program you are looking at stops in less than T seconds, go into an infinite loop. Otherwise, stop.' In order to avoid a contradiction the examiner program can't reach a decision in less than T seconds (minus any time added by those instructions). Running a program for at most T seconds can trivially give you more info if you can't wait any longer. I don't know how much this matters in practice, but the "infinite" part at least seems wrong.
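That construction is just the classic halting-problem diagonalization. A toy Python sketch of the untimed version of the trick, treating "programs" as plain Python functions; all names here are purely illustrative:

```python
def make_adversary(predictor):
    """Build a program that defeats `predictor`, where `predictor` is any
    function claiming to decide, by inspection alone, whether a given
    program halts (returning True for "halts", False for "loops")."""
    def adversary():
        if predictor(adversary):   # predictor says we halt...
            while True:            # ...so loop forever instead
                pass
        # predictor says we loop, so halt immediately
    return adversary

# Against any concrete predictor, the adversary does the opposite of
# what was predicted. (Only demo the "loops" verdict, since the other
# branch really does loop forever.)
claims_it_loops = lambda prog: False
adv = make_adversary(claims_it_loops)
adv()  # halts at once, refuting the prediction
```

The time-bounded variant in the comment above works the same way, except the adversary waits just past the examiner's deadline T before doing the opposite.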
And again, the fact that the problem involves self-knowledge seems very relevant to this layman. (typo fixed)
Replies from: wedrifid, Peterdjones↑ comment by Peterdjones · 2012-09-18T20:28:25.574Z · LW(p) · GW(p)
Running a program for at most T seconds can trivially give you more info
More info than what? Are you assuming that inspection is equivalent to one programme cycle, or something?
Replies from: hairyfigment↑ comment by hairyfigment · 2012-09-19T20:03:08.327Z · LW(p) · GW(p)
More info than inspecting the code for at most T seconds. Finite examination time seems like a reasonable assumption.
I get the impression you're reading more than I'm saying. If you want to get into the original topic we should probably forget the OP and discuss orthonormal's mini-sequence.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T00:00:59.045Z · LW(p) · GW(p)
More info than inspecting the code for at most T seconds.
More info than who or what inspecting the code? We are talking about superscientists here.
Replies from: hairyfigment↑ comment by hairyfigment · 2012-09-20T06:18:57.051Z · LW(p) · GW(p)
I no longer have any clue what we're talking about. Are superscientists computable? Do they seem likely to die in less than the lifespan of our (visible) universe? If not, why do we care about them?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-20T10:27:21.854Z · LW(p) · GW(p)
The point is that you can't say a person of unknown intelligence inspecting code for T seconds will necessarily conclude less than a computer of unknown power running the code for T seconds. You are comparing two unknowns.
↑ comment by Peterdjones · 2012-09-18T20:30:26.737Z · LW(p) · GW(p)
So why expect to recognize "qualia" from their descriptions?
Why expect an inability to figure out some things about your internal state to put on a technicolor display? Blind spots don't look like anything. Not even perceivable gaps in the visual field.
Replies from: hairyfigment↑ comment by hairyfigment · 2012-09-19T20:14:54.757Z · LW(p) · GW(p)
Why expect an inability to figure out some things about your internal state to put on a technicolor display?
What.
(Internal state seems a little misleading. At the risk of getting away from the real discussion again, Peano arithmetic is looking at a coded representation of itself when it fails to see certain facts about its proofs. But it needs some such symbols in order to have any self-awareness at all. And there exists a limit to what any arithmetical system or Turing machine can learn by this method. Oh, and the process that fills my blind spots puts on colorful displays all the time.)
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-19T23:56:54.413Z · LW(p) · GW(p)
There is no evidence that PA is self aware.
and the process that fills my blind spots puts on colorful displays all the time.)
So your blind spot is filled in by other blind spots?
↑ comment by Spinning_Sandwich · 2012-09-14T11:57:31.084Z · LW(p) · GW(p)
If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
If you believe this, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
In case I haven't driven the point home with enough clarity (for example, I did read the link the first time you posted it), I am claiming that there is something to experiencing the program/novel/world inasmuch as there is something to experiencing colors in the world. Whether that something is a subset of the code/words/physics or something additional is the whole point of the problem of qualia.
And no, I don't have a clear idea what a satisfying answer might look like.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T12:19:18.934Z · LW(p) · GW(p)
If you believe this, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
That doesn't follow. Figuring out the behaviour of a programme is just an exercise in logical deduction. It can be done by non-superscientists in easy cases, so it is just an extension of the same idea that a superscientist can handle difficult cases. However, there is no "easy case" of deducing a perceived quality from objective information.
Beyond that, if all you are saying is that the problem of colours is part of a larger problem of qualia, which itself is part of a larger issue of experience, I can answer with a wholehearted "maybe". That might make colour seem less exceptional and therefore less annihilation-worthy, but I otherwise don't see where you are going.
Replies from: Spinning_Sandwich↑ comment by Spinning_Sandwich · 2012-09-14T12:30:41.557Z · LW(p) · GW(p)
I'm not just talking about behavior. The kinds of things involved in experiencing a program involve subjective qualities, like whether Counter-Strike is more fun than Day of Defeat, which maybe can't be learned just from reading the code.
It's possible the analogy is actually flawed, and one is contained in its underlying components while the other is not, but I don't understand how they differ if they do, or why they should.
↑ comment by common_law · 2012-09-18T22:18:44.367Z · LW(p) · GW(p)
we do all see roughly the same thing: we've got pretty much the same sensory organs & brains to process what is roughly the same data. It seems reasonable to expect that most members of a given species should experience roughly the same picture of the world.
To my disappointment, David Papineau concluded the same, but we can't compare differences in pictures of the world to differences in the brain structure or function because we can have only a single example of a "picture of the world." "Pretty much the same sensory organs & brains" is useless because of its vagueness.
So much for the first problem, at least in brief & from a pragmatic point of view. The skeptical philosopher must admit that this is a silly problem to demand a decisive answer to.
To the contrary, the qualia problem is exactly the sort of problem to which philosophy can provide a decisive answer. For example, that we can't frame the qualitative differences between persons conceptually should lead philosophers to doubt the coherence of the qualia concept.
Does perhaps the notion that innate concepts might be incoherent create confusion?
comment by Trevor_Caverly · 2012-09-14T04:30:37.542Z · LW(p) · GW(p)
Is your position the same as Dennett's position (summarized in the second paragraph of synopsis here) ?
Replies from: metaphysicist, metaphysicist↑ comment by metaphysicist · 2012-09-19T19:37:04.733Z · LW(p) · GW(p)
Is your position the same as Dennett's position (summarized in the second paragraph of synopsis here)?
Let me try to answer more succinctly. Dennett and I are concerned with different problems; Dennett's is a problem within science proper, while mine is traditionally philosophical. Dennett's conclusion is that "qualia" don't provide introspective access to the functioning of the brain; my conclusion is that our common intuition concerning the existence of qualia is incoherent.
↑ comment by metaphysicist · 2012-09-18T09:05:24.950Z · LW(p) · GW(p)
Is your position the same as Dennett's position (summarized in the second paragraph of synopsis here) ?
I agree with Dennett that qualia don't exist. I disagree that the concept of qualia is basically a remnant of an outmoded psychological doctrine; I think it's an innate idea.
Dennett can be criticized for ignoring the subjective nature of qualia. He shows, for example, that reported phenomenal awareness is empirically bogus in that it doesn't correspond to the contents of working memory. I'm concerned with accounting for the subjective nature of the qualia concept.
Dennett basically thinks qualia are empirically falsifiable; I think the concept is incoherent.
comment by thomblake · 2012-09-17T21:36:00.656Z · LW(p) · GW(p)
It seems like 2c is in tension with 3b. The private-language problem ought to tell us that even if raw experiences exist, then we should not expect to have words to describe raw experience. But then, the lack of those words is in no way evidence that raw experiences do not exist, so 2c fails as an explanation.
Replies from: None, common_law↑ comment by [deleted] · 2012-09-17T21:48:18.976Z · LW(p) · GW(p)
I think we should assume from the outset that qualia are necessarily intensional, especially if we want them to play some epistemically foundational role, which is typically why they're invoked. If qualia have to be intensional, and the private language argument bars our associating any concepts with them then the private language argument contradicts the possibility of qualia. Not having concepts with which to talk or think about qualia means that we couldn't ever be aware of anything like a 'green' or 'painful' quale.
Replies from: thomblake↑ comment by common_law · 2012-09-18T08:27:49.474Z · LW(p) · GW(p)
The private-language problem ought to tell us that even if raw experiences exist, then we should not expect to have words to describe raw experience.
Wittgenstein's private-language argument, if sound, would obviate 2c. But 3b is based on Wittgenstein's account not being successful in explaining the absence of private language. It claims to be a solution to the private-language problem, recognizing that Wittgenstein was unsuccessful in solving it.
comment by Mitchell_Porter · 2012-09-14T09:30:08.179Z · LW(p) · GW(p)
The problem of color is for materialists what the problem of evil is for theists: it's the overwhelming fact that they can't help tripping over, but which they also can't bring themselves to take seriously. There is no necessary inconsistency in either case; you could have a theism in which God isn't good, or a materialism in which colors exist. But no, the existing concept (of God, of physics) has to be basically right; so all the creative energy goes into rationalizing that belief.
Replies from: DaFranker, ArisKatsaris, Spinning_Sandwich, None↑ comment by DaFranker · 2012-09-14T18:39:42.001Z · LW(p) · GW(p)
The problem of color is the problem of anthropomorphism.
In reductionist materialism, the "qualia" and "experience" of color is merely an internally-consistent, self-reinforcing creation of the animal brain that assigned specific neural values to specific inputs sent by specific cells that react to specific light wavelengths in some reality-entangled manner.
In this philosophy, we only perceive "color" as a "special experience" because we do not realize that the same is true for all of our senses, and that the same would be true of any other physically-possible "sense", and that some new incredible "qualia" would be literally created (gasp, you sinful blasphemer!) if we artificially created a new "sense" through modification of the human brain.
In summary: The "magical yellowness" qualia of yellow that feels like it can't possibly be merely information is actually created by your brain. It is "real" in that without this the yellowness would merely be knowledge of wavelengths, not yellowness-experience, but it is still created wholepiece by the brain, not by some light shiny from outside the universe.
In addition, this hypothesis is definitely testable. I made a claim above. Create a new sensory input / type of stimuli, and we will perceive a "new" qualia that was never perceived before - just as colorblind people who have never seen color, and have no idea what you're talking about, would suddenly be able to see colors.
Edit: I would stake out further and go so far as to claim, though this is not an easy hypothesis to test and falsify by any stretch and might not even be doable within my natural lifetime, that there is a tangible explanation for the particular properties (this is a magiclike unknown-explanation stopsign) of the experiences of our senses - of why sound feels and is experienced the way it does and is, why colors feel and are experienced the way they do and are. I would also posit a correlation between the feeling and the qualia-seeming experiences. All of this to posit the hypothesis that we could not only create new qualia, but even create new qualia with specific "kinds" of experience-qualia-ness, like creating a new sense that both feels and is experienced somewhere in-between colors and the 400hz sound in the n-space of "qualia".
Replies from: Richard_Kennaway, Richard_Kennaway, Peterdjones↑ comment by Richard_Kennaway · 2012-09-16T23:06:24.345Z · LW(p) · GW(p)
In addition, this hypothesis is definitely testable. I made a claim above. Create a new sensory input / type of stimuli, and we will perceive a "new" qualia that was never perceived before, just like colorblind people that have never seen color and don't have any idea what you're talking about who would suddenly be able to see colors.
There have been cases of people blind from birth who, by some medical treatment were enabled to see. No references to hand, but Oliver Sacks probably writes about this somewhere. They clearly get new qualia, which are moreover clearly different from those who were sighted from birth.
ETA: Wikipedia article on recovery from blindness.
Replies from: DaFranker↑ comment by DaFranker · 2012-09-17T13:15:40.569Z · LW(p) · GW(p)
I thought to use this too, but I was once or twice given the argument that blind people who are made to see are only "accessing" a Given-By-External-Power-To-Humans-At-Birth qualia from outside reality - the argument Eliezer tried to take down in the metaethics sequence about morality being "a light shining from outside" that humans just happen to find and match, applied to qualia. It's a very good stopsign and/or FGC, apparently.
Because of this, I looked for a more definitive test that these philosophies - those that would discard "creating" sight as a valid new qualia - do not predict (and arguably, cannot, in terms of probability mass, if they want to remain coherent).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-09-17T14:07:17.726Z · LW(p) · GW(p)
I thought to use this too, but I was once or twice given the argument that blind people who are made to see are only "accessing" a Given-By-External-Power-To-Humans-At-Birth qualia from outside reality
Surely that argument is refuted by the fact that the newly sighted do not receive the same qualia as the always-sighted? Instead, they get pretty much the experiences you might predict given what we know about the importance of early experience for the developing faculties: confusion overcome only imperfectly and with difficulty, and with assistance from their more developed senses.
The idea that they received something at birth that they have difficulty accessing has the same problem as the idea that the brain is merely a physical interface through which the soul interacts with the world: all the data are just as consistent with the simpler hypothesis that the brain is the whole story. (That includes the data that there are experiences, which is a difficulty for both materialism, and materialism with the magic word "soul" added.)
Replies from: DaFranker, bogus↑ comment by DaFranker · 2012-09-17T16:02:48.158Z · LW(p) · GW(p)
Surely that argument is refuted by the fact that the newly sighted do not receive the same qualia as the always-sighted?
Yes, it is, when you accept the evidence you've given as valid and can weight arguments based on their probability logic. Denial mechanisms in place will usually prevent proponents of the argument from recognizing the refutation as a valid one. Lots of difficult argumentation and untangling of webs of rationalizations ensues (and arguing by the Occam's Razor route is even less practical, because in their model, their hypotheses of soul or outer-light or what-have-you is simpler when other parts of their model of the whole world are taken into account, which means even more knots to untangle).
I seek to circumvent that debate entirely by putting the burden of proof on my own "side", for several reasons, some of which are tinted a slight shade of gray closer to the Dark Arts than I would like.
↑ comment by bogus · 2012-09-17T14:33:41.338Z · LW(p) · GW(p)
(That includes the data that there are experiences, which is a difficulty for both materialism, and materialism with the magic word "soul" added.)
I don't think this is correct. The phenomenology of subjective experience suggests that such experiences should be "simple" in a sense - sort of like a bundle of tiny little XML tags attached to the brain. Of course, this is not to argue that our brain parts literally have tiny little XML tags attached to them, any more than other complex objects do. But it does suggest that they might be causally connected to some other, physically simpler phenomena.
↑ comment by Richard_Kennaway · 2012-09-16T23:19:22.633Z · LW(p) · GW(p)
In this philosophy, we only perceive "color" as a "special experience" because we do not realize that the same is true for all of our senses
Indeed, all of our senses, by definition, have qualia, and colour is just a particularly striking example. It is interesting, though, to note that not all brain tissue produces qualia: the cerebellum operates without them. Our motor control (what the cerebellum primarily does) proceeds without qualia -- we have almost no awareness of what we are doing with individual muscles. This is why all forms of teaching people how to move, whether physiotherapy, dance training, martial arts, sports, and so on, make a lot of use of indirect visualisation to produce the desired results. (These can easily be mistaken, sometimes by the teachers themselves, for literal descriptions, e.g. of "chi" or "energy".) Golfers are taught to "follow through", even though nothing that happens after the moment of impact can have any effect on the ball. It is the intention to follow through that changes how the club is swung, and how it impacts the ball, in a way that could not be achieved by any more direct instruction.
↑ comment by Peterdjones · 2012-09-21T18:08:29.036Z · LW(p) · GW(p)
In this philosophy, we only perceive "color" as a "special experience" because we do not realize that the same is true for all of our senses
Ye-e-e-s, but the standard qualiaphilic take is that all the other senses are problematic as well. You think you are levelling down, but you are levelling up.
In addition, this hypothesis is definitely testable. I made a claim above. Create a new sensory input / type of stimuli, and we will perceive a "new" qualia that was never perceived before,
That isn't a test of reductionism, etc, since many of the alternatives make the same prediction. For instance, David Chalmers's theory that qualia are non-physical properties that supervene on the physical properties of the brain.
Replies from: DaFranker, TheOtherDave↑ comment by DaFranker · 2012-09-21T19:25:00.750Z · LW(p) · GW(p)
That isn't a test of reductionism, etc, since many of the alternatives make the same prediction. For instance, David Chalmers's theory that qualia are non-physical properties that supervene on the physical properties of the brain.
True, it isn't a particularly specific test that supports all the common views of most LW users. That is not its intended purpose.
The purpose is to establish that "qualia" are not ontologically basic building blocks of the universe sprung into existence alongside up-quarks and charmings for the express purpose of allowing some specific subset of possible complex causal systems to have more stuff that sets them apart from other complex causal systems, just because the former are able to causally build abstract model of parts of their own system and would have internal causal patterns abstractly modeled as "negative reinforcement" that they causally attempt to avoid being fired if these aforementioned "qualia" building blocks didn't set them apart from the latter kind of complex systems...
... but I guess it does sound kind of obviously silly when you phrase it from a reductionist perspective.
Replies from: Peterdjones, ArisKatsaris↑ comment by Peterdjones · 2012-09-21T19:54:49.227Z · LW(p) · GW(p)
The purpose is to establish that "qualia" are not ontologically basic building blocks of the universe sprung into existence alongside up-quarks and charmings for the express purpose of allowing some specific subset of possible complex causal systems to have more stuff that sets them apart from other complex causal systems,
But it doesn't. It just establishes that if they, they covary with physical states in the way that would be expected from identity theory. Admittedly it seems redundant to have a non-physical extra ingredient that nonetheless just shadows what brains are doing physically. I think that's a flaw in Chalmers' theory. But it's conceptual, not empirical.
Replies from: DaFranker↑ comment by DaFranker · 2012-09-21T23:23:29.903Z · LW(p) · GW(p)
It just establishes that if they, they covary with physical states in the way that would be expected from identity theory.
I... err... what? My mastery of the English language is insufficient to compute the meaning of the I-assume-is-a sentence above.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-23T14:15:07.689Z · LW(p) · GW(p)
I meant
"It just establishes that if they exist, they covary with physical states in the way that would be expected from identity theory."
But that's not the whole problem. It establishes they covary with physical states in the way that would be expected from identity theory, and Chalmersian dualism, and a bunch of other theories (but maybe not Cartesian dualism).
Tests need to distinguish between theories, and yours doesn't.
Replies from: DaFranker↑ comment by ArisKatsaris · 2012-09-21T19:40:01.847Z · LW(p) · GW(p)
The purpose is to establish that "qualia" are not ontologically basic building blocks of the universe sprung into existence alongside up-quarks and charmings
Since qualia describe an event (in a sense), I think that if they're ever found to have measurable existence, they'll not be so much what a gluon is to "top-quark", but more something like what division is to the real numbers...
Replies from: DaFranker↑ comment by TheOtherDave · 2012-09-21T19:22:40.974Z · LW(p) · GW(p)
Is there a short explanation of why I ought to reject an analogous theory that algorithms are non-physical properties that supervene on the physical properties of systems that implement those algorithms?
Or, actually, backing up... ought I reject such a theory, from Chalmer et al's perspective? Or is "1+1=2" a nonphysical property of certain systems (say, two individual apples placed alongside each other) in the same sense that "red" is?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-21T20:06:55.302Z · LW(p) · GW(p)
Is there a short explanation of why I ought to reject an analogous theory that algorithms are non-physical properties that supervene on the physical properties of systems that implement those algorithms?
Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
Replies from: None, bogus, TheOtherDave↑ comment by [deleted] · 2012-09-21T20:37:05.900Z · LW(p) · GW(p)
Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
Now I'm confused: what you just said is a description of a 'supervenient' relation. Are you saying that anytime X is said to supervene on Y, we should reject the theory which features X's?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-23T14:23:35.369Z · LW(p) · GW(p)
No. Supervenience is an ontologically neutral relationship. In Chalmers's theory, qualia supervene on brain states, so novel brain states will lead to novel qualia. In identity theory, qualia supervene on brain states, so ditto. So the Novel Qualia test does not distinguish the one from the other. The argument for qualia being non-physical properties, as opposed to algorithms, comes down to their reducibility, or lack thereof, not supervenience.
↑ comment by bogus · 2012-09-21T20:47:07.660Z · LW(p) · GW(p)
Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
This is not really true, at least without adding some pretty restrictive conditions. By using "joke interpretations", as pointed out by Searle and Putnam, one could assert that a huge number of "algorithms" supervene on any large-enough physical object.
↑ comment by TheOtherDave · 2012-09-21T20:34:52.214Z · LW(p) · GW(p)
Are they?
I mean, sure, the fact that a circuit implementing the algorithm "1+1=2" returns "2" given the instruction to execute "1+1" is entirely predictable, much as the fact that a mouse conditioned to avoid red will avoid a red room is predictable. Absolutely agreed.
But as I understand the idea of qualia, the claim is that the mouse's predictable behavior with respect to a red room (and the neural activity that gives rise to it) is not a complete description of what's going on... there is also the mouse's experience of red, which is an entirely separate, nonphysical, fact about the event, which cannot be explained by current physics even in principle. (Or maybe it turns out mice don't have an experience of red, but humans certainly do, or at least I certainly do.) Right?
Which, OK. But I also have the experience of seeing two things, just like I have the experience of seeing a red thing. On what basis do I justify the claim that that experience is completely described by a description of the physical system that calculates "2"? How do I know that my experience of 2 isn't an entirely separate nonphysical fact about the event which cannot be explained by current physics even in principle?
↑ comment by ArisKatsaris · 2012-09-14T09:56:40.980Z · LW(p) · GW(p)
Like Spinning_Sandwich, I don't think that color is qualitatively different in its problematic-ness than e.g. pitch of sound, or perception of geometry, or even memory of a smell, or any other aspect of consciousness.
Color just serves as the most easily referenced example of the mystery, because colors feel like a largely irreducible sensation (you can perhaps reduce it to two separate sensations of hue + brightness, or something like that, but not much further, the way one might try to reduce geometry to points and numbers).
↑ comment by Spinning_Sandwich · 2012-09-14T09:50:17.747Z · LW(p) · GW(p)
I don't see how colors in particular are a problem for materialism any more than consciousness itself is. I certainly fail to see how it's equivalent to the problem of evil for theists of the "God is good" bent.
Could you explain in a bit more detail how the problem of evil parallels this? And I mean excruciating detail, if possible, because I really haven't a clue what you're getting at.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T10:44:07.032Z · LW(p) · GW(p)
I don't know about excruciating detail, but I think the general idea is this:
One would not predict the existence of evil in a universe created by a benevolent God.
One would not predict the existence of intrinsically subjective qualities in an entirely physical, and therefore entirely objective, universe.
Replies from: None, Spinning_Sandwich↑ comment by [deleted] · 2012-09-14T17:28:31.429Z · LW(p) · GW(p)
One would not predict the existence of intrinsically subjective qualities in an entirely physical, and therefore entirely objective, universe.
Disagree.
Let's look at the actual observations. I see red. It has some atomic "redness" that is different from the atomic "blueness" of blue, and from the atomic pleasure of orgasm and the atomic feeling of cold. Each of these atomic "qualia" is subjectively irreducible. There are no smaller parts that my subjective experience of "red" is made up of.
Is this roughly the qualia problem? That's my understanding of it.
Here's a simple computer program that reports on whether or not it has atomic subjective experience:
qualia = {"red", "blue", "cold", "pleasure"}
memory_associations = {red = {"anger", "hot"}, blue = {"cold", "calm"},
                       pleasure = {"hot", "good"}}

function experience_qualia(input)
  for _, q in ipairs(qualia) do
    if input == q then
      print("my experience of", input, "is the same as", q)
    else
      print(q, "and", input, "feel different")
    end
  end
  print("furthermore, the feeling of", input, "seems connected to")
  print(table.concat(memory_associations[input], " and "))
  print("I have no way of reducing these experiences, therefore I exist outside physics")
end

experience_qualia("red")
experience_qualia("blue")
From the inside, the program encounters no mechanisms for reducing these atomic qualia, but from the outside, we can see that they are strings, made up of bytes, and compared by hash value. While I don't know the details of the neuroscience of qualia, I expect the findings to be roughly similar. Something will be an irreducible symbol, with various associations and uniqueness, from within the system; but from outside, we will be able to see "oh look, redness is this particular pattern of neurons firing".
EDIT: LW killed my program formatting. It should still run (lua, by the way)
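That inside/outside contrast can be made concrete. A minimal sketch in Python (following MBlume's port rather than the original Lua; this is my illustration, not part of the original program): inside the toy program, "red" is atomic because equality is the only operation ever applied to it; outside, the same symbol is plainly a sequence of bytes with a hash value.

```python
# Inside the toy program, "red" is treated as an atomic, irreducible symbol:
# equality comparison is the only operation the program applies to it.
red, blue = "red", "blue"
assert red == "red" and red != blue   # all the program ever "sees"

# Outside the program, the same symbol has rich internal structure.
print(list(red.encode("ascii")))      # the bytes it is made of: [114, 101, 100]
print(hash(red) == hash("red"))       # a hash value used for table lookups: True
```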
Replies from: MBlume, arundelo, ArisKatsaris, shminux, Peterdjones↑ comment by MBlume · 2012-09-14T17:38:36.091Z · LW(p) · GW(p)
Having never seen any Lua, I'm surprised by how much it looks like Python. Any idea whether Python stole its set literals from Lua?
ETA: Python port (with output)
Replies from: None, None↑ comment by [deleted] · 2012-09-14T17:42:57.918Z · LW(p) · GW(p)
python:
{'x': 'y'}
['x', 'y']
Lua:
{x='y'}
{'x', 'y'}
Also, lots of syntax differences (end, then, do, function, whitespace, elseif, etc.). They are similar in that they are both dynamic languages. I don't think either was particularly inspired by the other.
Replies from: MBlume↑ comment by [deleted] · 2012-09-14T18:45:21.786Z · LW(p) · GW(p)
thanks for the port.
Next up we should extend it with free will and true knowledge (causal entanglement).
And I think someone asked about not demonstrating qualia sameness in the absence of truthful reporting.
(I'm not going to waste more time on any of this, but it could be done)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-14T19:22:20.202Z · LW(p) · GW(p)
If you mean this... to be clear, I didn't complain about it not demonstrating "qualia sameness". I complained (implicitly) that the claim that it demonstrated all the properties that some people claim demonstrate qualia in real-world systems (like people) was demonstrably false.
(In particular, that it didn't demonstrate anything persistent across different reporting, whereas my own experience does demonstrate something persistent across different reporting.)
I agree that actually recoding it to demonstrate such persistence is a waste of time; far simpler is to not make such over-reaching claims.
Replies from: None↑ comment by [deleted] · 2012-09-14T19:26:14.176Z · LW(p) · GW(p)
I removed "complained".
I agree that actually recoding it to demonstrate such persistence is a waste of time; far simpler is to not make such over-reaching claims.
Point taken. As I tried to explain somewhere, it was all the properties that I thought of at the moment, with the implicit assertion that the rest of the properties could be demonstrated as required.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-09-14T19:27:56.058Z · LW(p) · GW(p)
Point taken.
↑ comment by ArisKatsaris · 2012-09-14T17:37:29.654Z · LW(p) · GW(p)
"From the inside, the program experiences no mechanisms of reduction of these atomic qualia"
Materialism predicts that algorithms have an "inside"?
As a further note, I'll have to say that if all the blue and all the red in my visual experience were switched around, my hunch tells me that I'd be experiencing something different; not just in the sense of different memory associations, but that the visual experience itself would be different. It would not just be that "red" is associated with hot, and that "blue" is associated with cold... The qualia of the visual experience itself would be different.
Replies from: None, FAWS↑ comment by [deleted] · 2012-09-14T17:49:34.667Z · LW(p) · GW(p)
Materialism predicts that algorithms have an "inside"?
Yes. The scene from within a formal system (like algebra) has certain qualities (equations, variables, functions, etc.) that are different from the scene outside (markings on paper, the equals sign, BEDMAS, variable names, brackets for function application).
That's not really a materialism thing, it's a math thing.
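One way to illustrate that inside/outside distinction (a hypothetical Python sketch of my own, not anyone's code in this thread): seen from inside the formal system, f is the rule "add one"; seen from outside, it is a string of bytecode bytes, the analogue of ink marks on paper.

```python
import dis

# Inside view: f is an abstract rule mapping numbers to numbers.
def f(x):
    return x + 1

assert f(1) == 2            # within the system, only the rule is visible

# Outside view: the "same" f is raw bytes plus an instruction listing,
# with no intrinsic meaning -- the analogue of markings on paper.
print(f.__code__.co_code)   # the raw bytecode as a bytes object
dis.dis(f)                  # a disassembly listing, not "x + 1"
```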
As a further note, I'll have to say that if all the blue and all the red in my visual experience were switched around, my hunch tells me that I'd be experiencing something different; not just in the sense of different memory associations, but that the visual experience itself would be different. It would not just be that "red" is associated with hot, and that "blue" is associated with cold... The qualia of the visual experience itself would be different.
Hence the part where they are compared to other qualia. Maybe that's not enough, but imagining getting "blue" or "sdfg66df" instead of "red" (which is the evidence you are using) is of course going to return "they are different" because they don't compare equal. Even if the output of the computation ends up being the same.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-09-14T17:59:46.694Z · LW(p) · GW(p)
That's not really a materialism thing, it's a math thing.
I'm under the impression that what you describe falls under computationalism, not materialism, but my reading on these ideas is shallow and I may be confusing some of these terms...
Replies from: None↑ comment by FAWS · 2012-09-17T16:21:40.738Z · LW(p) · GW(p)
That thought experiment doesn't make much sense. If the experiences were somehow switched, but everything else kept the same (i.e., all your memories and associations of red are still connected to each other and to everything else in the same way), you wouldn't notice the difference; everything would still match your memories exactly. If there even is such a thing as raw qualia, there is no reason to suppose they are stable from one moment to the next; as long as the correct network of associations is triggered, there is no evolutionary advantage either way.
↑ comment by Shmi (shminux) · 2012-09-14T17:42:38.360Z · LW(p) · GW(p)
It should still run (lua, by the way)
I could not find an online Lua-bin, but pasting it into a Lua Demo and clicking Run does the trick.
Replies from: None↑ comment by Peterdjones · 2012-09-14T18:05:10.381Z · LW(p) · GW(p)
There's no evidence that your programme experiences anything from the inside. Which is one way in which your claim is surreptitiously eliminativist. Another is that, examined from the outside, we can tell what the programme's qualia are: they are nothing. They have no qualities other than being different from one another. But qualia don't seem like that from the inside! You say your programme's qualia are subjective because it can't examine their internal structure...but there isn't any. They are not subjective somethings; they are just nothings.
Replies from: None↑ comment by [deleted] · 2012-09-14T18:11:39.639Z · LW(p) · GW(p)
There's no evidence that your programme experiences anything from the inside.
then neither is there evidence that I do, or you do.
they are nothing. They have no quaities other than being different from one another.
I can't think of qualities that my subjective experience of "red" has that the atom "red" does not have in my program.
But qualia don't seem like that from the inside!
Sure they do. Redness has this unique redness to it, the same way "red" has this uniqueness.
your programme's qualia are subjective because
I was using "subjective" as a perspective, not a quality.
can't examine their internal structure...but there ins't any.
Sure there is. Go look in the Lua source code: there is the global string memo-table, GC metadata, string contents (an array of bytes), type annotations, etc.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T12:37:16.848Z · LW(p) · GW(p)
then neither is there evidence that I do, or you do.
I have plenty of evidence of my own experiences. Were you restricting "evidence" to third-person, objective evidence?
I can't think of qualities that my subjective experience of "red" has that the atom "red" does not have in my program.
I can. I think that if I experienced nothing but an even expanse of red, that would be different from experiencing nothing but a salty taste, or nothing but middle C.
Sure they do. Redness has this unique redness to it the same way "red" has this unique ness.
Redness isn't expressible. "Object at 0x8cf643" is.
Your programme's qualia are subjective because it can't examine their internal structure...but there isn't any.
Sure there is. Go look in the Lua source code: there is the global string memo-table, GC metadata, string contents (an array of bytes), type annotations, etc.
If that's accessible to them, it's objective and expressible. If not, it's just a nothing. Either way, you don't have a "something" that is subjective.
↑ comment by Spinning_Sandwich · 2012-09-14T11:05:13.972Z · LW(p) · GW(p)
I wouldn't predict the existence of self-replicating molecules either. In fact, I'm not sure I'm in a position to predict anything at all about physical phenomena without appealing to empirical knowledge I've gathered from this particular physical world.
It's a pickle, all right.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T11:10:36.445Z · LW(p) · GW(p)
OK: "does not predict" was not strong enough. In each case, the opposite is predicted.
↑ comment by [deleted] · 2012-09-14T17:03:24.779Z · LW(p) · GW(p)
I can write a computer program that experiences qualia to the same extent that I do. What confusing thing is left?
Evil is a problem because the benevolent-god hypothesis predicts its non-existence. Qualia are not a problem; materialism adequately explains all aspects of them, except the exact neuroscience details.
Replies from: selylindi, Richard_Kennaway↑ comment by selylindi · 2012-09-14T17:15:52.017Z · LW(p) · GW(p)
I can write a computer program that experiences qualia to the same extent that I do.
Please do so and publish.
Replies from: None↑ comment by [deleted] · 2012-09-14T17:34:35.041Z · LW(p) · GW(p)
See my other comment in this thread for the code.
It's very simple, and it's not an AI, but its qualia have all the properties that mine seem to have.
Replies from: TheOtherDave, selylindi↑ comment by TheOtherDave · 2012-09-14T17:45:29.637Z · LW(p) · GW(p)
All the properties?
Huh.
For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
I see nothing in your sample code that is capable of supporting that behavior -- that is, your code either reports the experience or it doesn't, but there's no second thing that can either align with the report or conflict with it, or that can be shared between two runs of the program one of which reports the experience and one of which doesn't.
I conclude that my experience of perceiving inputs has relevant properties that your sample code does not.
I suspect that's true of everyone else, as well.
Replies from: None↑ comment by [deleted] · 2012-09-14T17:59:27.736Z · LW(p) · GW(p)
All the properties?
All the ones I thought of in the moment.
For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
Once you put in the functionality for it to lie about what it's experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
You could record that that sameness was there by remembering previous inputs and looking at those.
shared between two runs of the program one of which reports the experience and one of which doesn't.
This is a different issue, analogous to whether my "red" and your "red" are the same. From the inside, we'd feel some of the same things (stop sign, aggressiveness, hot), but then some different things (that apple I ate yesterday). From the outside, they are implemented in different chunks of flesh, but may or may not have analogous patterns representing them.
Once you can clearly specify what question to ask, I think the program can answer it and will have the same conclusion you do.
I hold that qualia are opaque symbols.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-14T18:45:33.033Z · LW(p) · GW(p)
Once you put in the functionality for it to lie about what it's experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
I hold that qualia are opaque symbols.
But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its "qualia".
Replies from: None↑ comment by [deleted] · 2012-09-14T18:55:24.338Z · LW(p) · GW(p)
Not sure what you are getting at.
You can make it as opaque or transparent as you want by only exposing a certain set of operation to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.
A flaw in mine is that strings can be reduced by .. (concatenation) and other string operations. I just pretended that those operations weren't available (most of the restrictions you make in a program are pretend). I'll admit I didn't do a very good job of drawing the line between the thing existing in the system and the system itself. But that could be done with more architecting.
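That line could be drawn more explicitly. A hypothetical Python sketch (the names and class are mine, not nyan_sandwich's code): wrap each quale's tuple representation in a class that exposes only equality and a "closeness" comparison, so the rest of the system cannot reduce the symbol any further.

```python
class Quale:
    """An opaque symbol: client code may only test equality and closeness."""
    def __init__(self, rgb):
        self._rgb = tuple(rgb)   # hidden representation, e.g. (1, 0, 0) for red

    def __eq__(self, other):
        return isinstance(other, Quale) and self._rgb == other._rgb

    def __hash__(self):
        return hash(self._rgb)

    def closeness(self, other):
        # Smaller means more similar; callers never see the components.
        return sum(abs(a - b) for a, b in zip(self._rgb, other._rgb))

red    = Quale((1, 0, 0))
orange = Quale((1, 0.5, 0))
blue   = Quale((0, 0, 1))

assert red != blue
assert red.closeness(orange) < red.closeness(blue)   # red is "nearer" orange
```

From inside the system, a Quale is exactly as irreducible as the original strings were by convention; the difference is that here the interface, not politeness, enforces the opacity.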
Replies from: Peterdjones, Peterdjones↑ comment by Peterdjones · 2012-09-18T12:40:58.856Z · LW(p) · GW(p)
A flaw in mine is that strings can be reduced by .. (concatenation) and string operations
Well, the original idea used CLISP GENSYMs.
↑ comment by Peterdjones · 2012-09-14T19:06:07.598Z · LW(p) · GW(p)
So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
Replies from: None↑ comment by [deleted] · 2012-09-14T19:13:03.838Z · LW(p) · GW(p)
How do you know that you are doing the experiencing? It's because the system you call "you" is the one making the observations about experience.
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
Of course once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I'd written it in haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn't)? What if I ran it twice?
Replies from: Peterdjones↑ comment by Peterdjones · 2012-09-18T20:38:27.908Z · LW(p) · GW(p)
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
And which one is that? Both the software and the hardware could be said to be. But your compu-qualia are accessible to the one, but not the other!
What if I'd written it in haskell
Haskell doesn't do anything. Electrons pushing electrons does things.
Replies from: None↑ comment by selylindi · 2012-09-14T18:10:07.912Z · LW(p) · GW(p)
Um, that program has no causal entanglement with 700nm-wavelength light, 470nm-wavelength light, temperature, or a utility function. I am totally unwilling to admit it might experience red, blue, cold, or pleasure.
Replies from: MBlume, None↑ comment by MBlume · 2012-09-14T18:13:41.308Z · LW(p) · GW(p)
If I upload you and stimulate your upload's "red" cones, you'll have red qualia, without any 700nm light involved (except for the 700nm light which gave rise to your mind-design which I copied etc., but if you're talking about entanglement that distant, then nyan_sandwich was also entangled with 700nm light before writing the code)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-14T18:18:15.637Z · LW(p) · GW(p)
If I upload you and stimulate your upload's "red" cones, you'll have red qualia
No need for uploading, electrodes in the brain do the trick.
Replies from: MBlume↑ comment by MBlume · 2012-09-14T18:19:49.777Z · LW(p) · GW(p)
...that really should have occurred to me first.
Replies from: selylindi↑ comment by selylindi · 2012-09-14T18:37:54.216Z · LW(p) · GW(p)
Yes, my experience of redness can come not only from light, but also from dreams, hallucinations, sensory illusions, and direct neural stimulation. But I think the entanglement with light has to be present first and the others depend on it in order for the qualia to be there.
Take, for example, the occasional case of cochlear implants for people born deaf. When the implant is turned on, they immediately have a sensation, but that sensation only gradually becomes "sound" qualia to them over roughly a year of living with that new sensory input. They don't experience the sound qualia in dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation) until after their brain is adapted to interpreting and using sound.
Or take the case of tongue-vision systems for people born blind. It likewise starts out as an uninformative mess of a signal to the user, but gradually turns into a subjective experience of sight as the user learns to make sense of the signal. They recognize the experience from how other people have spoken of it, but they never knew the experience previously from dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation).
In short, I think the long-term potentiation of the neural pathways is a very significant kind of causal entanglement that is not present in the program under discussion.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-09-14T23:56:35.983Z · LW(p) · GW(p)
I think the entanglement with light has to be present first and the others depend on it in order for the qualia to be there.
What if you're a brain in a vat, and you've grown up plugged into a high-resolution World of Warcraft? If qualia are wholly inside the skull, their qualitative character can't depend on facts outside the skull.
Replies from: Lightwave↑ comment by Lightwave · 2012-09-17T09:12:31.038Z · LW(p) · GW(p)
Well you need some input to the brain, even if it's in a vat. Something has to either stimulate the retina or stimulate the relevant neurons further down the line. At least during some learning phase.
Or I guess you could assemble a brain-in-a-vat with memories built-in (e.g. the memory of seeing red). Thus the brain will have the architecture (and therefore the ability) to imagine red.
↑ comment by [deleted] · 2012-09-14T18:15:04.548Z · LW(p) · GW(p)
I can't tell if you are joking.
We could give it all those things. Machine vision is easy. A temperature measurement is easy. A pleasure-based reward system is easy (a Bayesian spam filter).
Utility functions are unrelated to pleasure. (We could make it optimize too, though, if you want. Give it free will to boot.)
Replies from: selylindi↑ comment by Richard_Kennaway · 2012-09-16T23:25:41.302Z · LW(p) · GW(p)
Qualia is not a problem; materialism adequately explains all aspects of it, except the exact neuroscience details.
"Some factors are still missing, like the expression of the people's will..."